00:00:00.000 Started by upstream project "autotest-per-patch" build number 132368 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.104 The recommended git tool is: git 00:00:00.104 using credential 00000000-0000-0000-0000-000000000002 00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.158 Fetching changes from the remote Git repository 00:00:00.162 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.222 Using shallow fetch with depth 1 00:00:00.222 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.222 > git --version # timeout=10 00:00:00.268 > git --version # 'git version 2.39.2' 00:00:00.268 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.295 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.295 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.420 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.430 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.444 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.444 > git config core.sparsecheckout # timeout=10 00:00:06.455 > git read-tree -mu HEAD # timeout=10 00:00:06.472 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.490 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.490 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.572 [Pipeline] Start of Pipeline 00:00:06.589 [Pipeline] library 00:00:06.591 Loading library shm_lib@master 00:00:06.591 Library shm_lib@master is cached. Copying from home. 00:00:06.609 [Pipeline] node 00:00:06.619 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest_3 00:00:06.621 [Pipeline] { 00:00:06.628 [Pipeline] catchError 00:00:06.629 [Pipeline] { 00:00:06.639 [Pipeline] wrap 00:00:06.647 [Pipeline] { 00:00:06.656 [Pipeline] stage 00:00:06.658 [Pipeline] { (Prologue) 00:00:06.689 [Pipeline] echo 00:00:06.691 Node: VM-host-SM0 00:00:06.697 [Pipeline] cleanWs 00:00:06.709 [WS-CLEANUP] Deleting project workspace... 00:00:06.710 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.730 [WS-CLEANUP] done 00:00:06.914 [Pipeline] setCustomBuildProperty 00:00:06.995 [Pipeline] httpRequest 00:00:07.841 [Pipeline] echo 00:00:07.843 Sorcerer 10.211.164.20 is alive 00:00:07.852 [Pipeline] retry 00:00:07.854 [Pipeline] { 00:00:07.867 [Pipeline] httpRequest 00:00:07.871 HttpMethod: GET 00:00:07.872 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.872 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.892 Response Code: HTTP/1.1 200 OK 00:00:07.892 Success: Status code 200 is in the accepted range: 200,404 00:00:07.893 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.948 [Pipeline] } 00:00:14.966 [Pipeline] // retry 00:00:14.974 [Pipeline] sh 00:00:15.255 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.273 [Pipeline] httpRequest 00:00:15.787 [Pipeline] echo 00:00:15.789 Sorcerer 10.211.164.20 is alive 00:00:15.798 [Pipeline] retry 00:00:15.800 [Pipeline] { 00:00:15.813 [Pipeline] httpRequest 00:00:15.817 HttpMethod: GET 00:00:15.818 URL: http://10.211.164.20/packages/spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz 00:00:15.818 Sending request to url: http://10.211.164.20/packages/spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz 00:00:15.843 Response Code: HTTP/1.1 200 OK 00:00:15.844 Success: Status code 200 is in the accepted range: 200,404 00:00:15.845 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz 00:03:24.308 [Pipeline] } 00:03:24.324 [Pipeline] // retry 00:03:24.332 [Pipeline] sh 00:03:24.612 + tar --no-same-owner -xf spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz 00:03:27.906 [Pipeline] sh 00:03:28.185 + git -C spdk log --oneline -n5 00:03:28.185 a5dab6cf7 test/nvme/xnvme: Make sure nvme selected for tests is not used 00:03:28.185 876509865 test/nvme/xnvme: Test all conserve_cpu variants 00:03:28.185 a25b16198 test/nvme/xnvme: Enable polling in nvme driver 00:03:28.185 bb53e3ad9 test/nvme/xnvme: Drop null_blk 00:03:28.185 ace52fb4b test/nvme/xnvme: Tidy the test suite 00:03:28.204 [Pipeline] writeFile 00:03:28.220 [Pipeline] sh 00:03:28.502 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:03:28.514 [Pipeline] sh 00:03:28.794 + cat autorun-spdk.conf 00:03:28.794 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:28.794 SPDK_TEST_NVME=1 00:03:28.794 SPDK_TEST_FTL=1 00:03:28.794 SPDK_TEST_ISAL=1 00:03:28.794 SPDK_RUN_ASAN=1 00:03:28.794 SPDK_RUN_UBSAN=1 00:03:28.794 SPDK_TEST_XNVME=1 00:03:28.794 SPDK_TEST_NVME_FDP=1 00:03:28.794 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:28.801 RUN_NIGHTLY=0 00:03:28.803 [Pipeline] } 00:03:28.817 [Pipeline] // stage 00:03:28.832 [Pipeline] stage 00:03:28.834 [Pipeline] { (Run VM) 00:03:28.848 [Pipeline] sh 00:03:29.129 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:03:29.129 + echo 'Start stage prepare_nvme.sh' 00:03:29.129 Start stage prepare_nvme.sh 00:03:29.129 + [[ -n 7 ]] 00:03:29.129 + disk_prefix=ex7 00:03:29.129 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]] 00:03:29.129 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]] 00:03:29.129 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf 00:03:29.129 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:29.129 ++ SPDK_TEST_NVME=1 00:03:29.129 ++ SPDK_TEST_FTL=1 00:03:29.129 ++ SPDK_TEST_ISAL=1 00:03:29.129 ++ SPDK_RUN_ASAN=1 
00:03:29.129 ++ SPDK_RUN_UBSAN=1 00:03:29.129 ++ SPDK_TEST_XNVME=1 00:03:29.129 ++ SPDK_TEST_NVME_FDP=1 00:03:29.129 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:29.129 ++ RUN_NIGHTLY=0 00:03:29.129 + cd /var/jenkins/workspace/nvme-vg-autotest_3 00:03:29.129 + nvme_files=() 00:03:29.129 + declare -A nvme_files 00:03:29.129 + backend_dir=/var/lib/libvirt/images/backends 00:03:29.129 + nvme_files['nvme.img']=5G 00:03:29.129 + nvme_files['nvme-cmb.img']=5G 00:03:29.129 + nvme_files['nvme-multi0.img']=4G 00:03:29.129 + nvme_files['nvme-multi1.img']=4G 00:03:29.129 + nvme_files['nvme-multi2.img']=4G 00:03:29.129 + nvme_files['nvme-openstack.img']=8G 00:03:29.129 + nvme_files['nvme-zns.img']=5G 00:03:29.129 + (( SPDK_TEST_NVME_PMR == 1 )) 00:03:29.129 + (( SPDK_TEST_FTL == 1 )) 00:03:29.129 + nvme_files["nvme-ftl.img"]=6G 00:03:29.129 + (( SPDK_TEST_NVME_FDP == 1 )) 00:03:29.129 + nvme_files["nvme-fdp.img"]=1G 00:03:29.129 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:03:29.129 + for nvme in "${!nvme_files[@]}" 00:03:29.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:03:29.129 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:03:29.129 + for nvme in "${!nvme_files[@]}" 00:03:29.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G 00:03:29.129 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:03:29.129 + for nvme in "${!nvme_files[@]}" 00:03:29.129 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:03:29.388 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:03:29.388 + for nvme in "${!nvme_files[@]}" 00:03:29.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:03:29.388 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:03:29.388 + for nvme in "${!nvme_files[@]}" 00:03:29.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:03:29.388 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:03:29.388 + for nvme in "${!nvme_files[@]}" 00:03:29.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:03:29.388 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:03:29.388 + for nvme in "${!nvme_files[@]}" 00:03:29.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:03:29.388 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:03:29.388 + for nvme in "${!nvme_files[@]}" 00:03:29.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G 00:03:29.647 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:03:29.647 + for nvme in "${!nvme_files[@]}" 00:03:29.647 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:03:29.647 Formatting 
'/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:03:29.647 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:03:29.647 + echo 'End stage prepare_nvme.sh' 00:03:29.647 End stage prepare_nvme.sh 00:03:29.658 [Pipeline] sh 00:03:29.937 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:03:29.937 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:03:30.195 00:03:30.195 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant 00:03:30.195 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk 00:03:30.195 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3 00:03:30.195 HELP=0 00:03:30.195 DRY_RUN=0 00:03:30.195 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img, 00:03:30.195 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:03:30.195 NVME_AUTO_CREATE=0 00:03:30.195 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,, 00:03:30.195 NVME_CMB=,,,, 00:03:30.195 NVME_PMR=,,,, 00:03:30.195 NVME_ZNS=,,,, 00:03:30.195 NVME_MS=true,,,, 00:03:30.195 NVME_FDP=,,,on, 00:03:30.195 SPDK_VAGRANT_DISTRO=fedora39 00:03:30.195 SPDK_VAGRANT_VMCPU=10 00:03:30.195 SPDK_VAGRANT_VMRAM=12288 00:03:30.195 SPDK_VAGRANT_PROVIDER=libvirt 00:03:30.195 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:03:30.195 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:03:30.195 SPDK_OPENSTACK_NETWORK=0 00:03:30.195 VAGRANT_PACKAGE_BOX=0 00:03:30.195 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:03:30.195 FORCE_DISTRO=true 00:03:30.195 VAGRANT_BOX_VERSION= 00:03:30.195 EXTRA_VAGRANTFILES= 00:03:30.196 NIC_MODEL=e1000 00:03:30.196 00:03:30.196 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt' 00:03:30.196 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3 00:03:33.543 Bringing machine 'default' up with 'libvirt' provider... 00:03:34.479 ==> default: Creating image (snapshot of base box volume). 00:03:34.738 ==> default: Creating domain with the following settings... 
00:03:34.738 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732093229_f42fa50d9793b4b22aa5 00:03:34.738 ==> default: -- Domain type: kvm 00:03:34.738 ==> default: -- Cpus: 10 00:03:34.738 ==> default: -- Feature: acpi 00:03:34.738 ==> default: -- Feature: apic 00:03:34.738 ==> default: -- Feature: pae 00:03:34.738 ==> default: -- Memory: 12288M 00:03:34.738 ==> default: -- Memory Backing: hugepages: 00:03:34.738 ==> default: -- Management MAC: 00:03:34.738 ==> default: -- Loader: 00:03:34.738 ==> default: -- Nvram: 00:03:34.738 ==> default: -- Base box: spdk/fedora39 00:03:34.738 ==> default: -- Storage pool: default 00:03:34.738 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732093229_f42fa50d9793b4b22aa5.img (20G) 00:03:34.738 ==> default: -- Volume Cache: default 00:03:34.738 ==> default: -- Kernel: 00:03:34.738 ==> default: -- Initrd: 00:03:34.738 ==> default: -- Graphics Type: vnc 00:03:34.738 ==> default: -- Graphics Port: -1 00:03:34.738 ==> default: -- Graphics IP: 127.0.0.1 00:03:34.738 ==> default: -- Graphics Password: Not defined 00:03:34.738 ==> default: -- Video Type: cirrus 00:03:34.738 ==> default: -- Video VRAM: 9216 00:03:34.738 ==> default: -- Sound Type: 00:03:34.738 ==> default: -- Keymap: en-us 00:03:34.738 ==> default: -- TPM Path: 00:03:34.738 ==> default: -- INPUT: type=mouse, bus=ps2 00:03:34.738 ==> default: -- Command line args: 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:03:34.738 ==> default: -> value=-drive, 00:03:34.738 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:03:34.738 ==> default: -> value=-drive, 00:03:34.738 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:03:34.738 ==> default: -> value=-drive, 00:03:34.738 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:34.738 ==> default: -> value=-drive, 00:03:34.738 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:34.738 ==> default: -> value=-drive, 00:03:34.738 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:03:34.738 ==> default: -> value=-drive, 00:03:34.738 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:03:34.738 ==> default: -> value=-device, 00:03:34.738 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:03:34.997 ==> default: Creating shared folders metadata... 00:03:34.997 ==> default: Starting domain. 00:03:36.901 ==> default: Waiting for domain to get an IP address... 00:03:54.988 ==> default: Waiting for SSH to become available... 00:03:54.988 ==> default: Configuring and enabling network interfaces... 00:03:59.176 default: SSH address: 192.168.121.220:22 00:03:59.176 default: SSH username: vagrant 00:03:59.176 default: SSH auth method: private key 00:04:00.551 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:08.661 ==> default: Mounting SSHFS shared folder... 00:04:09.596 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:04:09.596 ==> default: Checking Mount.. 00:04:10.972 ==> default: Folder Successfully Mounted! 00:04:10.972 ==> default: Running provisioner: file... 00:04:11.539 default: ~/.gitconfig => .gitconfig 00:04:12.105 00:04:12.105 SUCCESS! 00:04:12.105 00:04:12.105 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:04:12.105 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:12.105 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 00:04:12.105 00:04:12.114 [Pipeline] } 00:04:12.131 [Pipeline] // stage 00:04:12.140 [Pipeline] dir 00:04:12.141 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt 00:04:12.142 [Pipeline] { 00:04:12.154 [Pipeline] catchError 00:04:12.155 [Pipeline] { 00:04:12.167 [Pipeline] sh 00:04:12.444 + vagrant ssh-config --host vagrant 00:04:12.444 + sed -ne /^Host/,$p 00:04:12.444 + tee ssh_conf 00:04:16.622 Host vagrant 00:04:16.622 HostName 192.168.121.220 00:04:16.622 User vagrant 00:04:16.622 Port 22 00:04:16.622 UserKnownHostsFile /dev/null 00:04:16.622 StrictHostKeyChecking no 00:04:16.622 PasswordAuthentication no 00:04:16.622 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:04:16.622 IdentitiesOnly yes 00:04:16.622 LogLevel FATAL 00:04:16.622 ForwardAgent yes 00:04:16.622 ForwardX11 yes 00:04:16.622 00:04:16.635 [Pipeline] withEnv 00:04:16.637 [Pipeline] { 00:04:16.651 [Pipeline] sh 00:04:16.929 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:16.929 source /etc/os-release 00:04:16.929 [[ -e /image.version ]] && img=$(< /image.version) 00:04:16.929 # Minimal, systemd-like check. 
00:04:16.929 if [[ -e /.dockerenv ]]; then 00:04:16.929 # Clear garbage from the node's name: 00:04:16.929 # agt-er_autotest_547-896 -> autotest_547-896 00:04:16.929 # $HOSTNAME is the actual container id 00:04:16.929 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:16.929 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:16.929 # We can assume this is a mount from a host where container is running, 00:04:16.929 # so fetch its hostname to easily identify the target swarm worker. 00:04:16.929 container="$(< /etc/hostname) ($agent)" 00:04:16.929 else 00:04:16.929 # Fallback 00:04:16.929 container=$agent 00:04:16.929 fi 00:04:16.929 fi 00:04:16.929 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:16.929 00:04:17.196 [Pipeline] } 00:04:17.212 [Pipeline] // withEnv 00:04:17.220 [Pipeline] setCustomBuildProperty 00:04:17.235 [Pipeline] stage 00:04:17.237 [Pipeline] { (Tests) 00:04:17.254 [Pipeline] sh 00:04:17.527 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:17.798 [Pipeline] sh 00:04:18.078 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:18.349 [Pipeline] timeout 00:04:18.349 Timeout set to expire in 50 min 00:04:18.351 [Pipeline] { 00:04:18.365 [Pipeline] sh 00:04:18.644 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:19.210 HEAD is now at a5dab6cf7 test/nvme/xnvme: Make sure nvme selected for tests is not used 00:04:19.222 [Pipeline] sh 00:04:19.499 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:19.771 [Pipeline] sh 00:04:20.049 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:20.324 [Pipeline] sh 00:04:20.603 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:04:20.861 ++ readlink -f spdk_repo 00:04:20.861 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:20.861 + [[ -n /home/vagrant/spdk_repo ]] 00:04:20.861 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:20.861 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:20.861 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:20.861 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:04:20.861 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:20.861 + [[ nvme-vg-autotest == pkgdep-* ]] 00:04:20.861 + cd /home/vagrant/spdk_repo 00:04:20.861 + source /etc/os-release 00:04:20.861 ++ NAME='Fedora Linux' 00:04:20.861 ++ VERSION='39 (Cloud Edition)' 00:04:20.861 ++ ID=fedora 00:04:20.861 ++ VERSION_ID=39 00:04:20.861 ++ VERSION_CODENAME= 00:04:20.861 ++ PLATFORM_ID=platform:f39 00:04:20.861 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:20.861 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:20.861 ++ LOGO=fedora-logo-icon 00:04:20.861 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:20.861 ++ HOME_URL=https://fedoraproject.org/ 00:04:20.861 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:20.861 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:20.861 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:20.861 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:20.861 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:20.861 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:20.861 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:20.861 ++ SUPPORT_END=2024-11-12 00:04:20.861 ++ VARIANT='Cloud Edition' 00:04:20.861 ++ VARIANT_ID=cloud 00:04:20.861 + uname -a 00:04:20.861 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:20.861 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:21.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.379 Hugepages 00:04:21.379 node hugesize free / total 00:04:21.379 node0 1048576kB 0 / 0 00:04:21.379 node0 2048kB 0 / 0 00:04:21.379 00:04:21.379 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:21.379 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:21.638 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:21.638 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:21.638 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:21.638 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:21.638 + rm -f /tmp/spdk-ld-path 00:04:21.638 + source autorun-spdk.conf 00:04:21.638 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:21.638 ++ SPDK_TEST_NVME=1 00:04:21.638 ++ SPDK_TEST_FTL=1 00:04:21.638 ++ SPDK_TEST_ISAL=1 00:04:21.638 ++ SPDK_RUN_ASAN=1 00:04:21.638 ++ SPDK_RUN_UBSAN=1 00:04:21.638 ++ SPDK_TEST_XNVME=1 00:04:21.638 ++ SPDK_TEST_NVME_FDP=1 00:04:21.638 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:21.638 ++ RUN_NIGHTLY=0 00:04:21.638 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:21.638 + [[ -n '' ]] 00:04:21.638 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:21.638 + for M in /var/spdk/build-*-manifest.txt 00:04:21.638 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:21.638 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:21.638 + for M in /var/spdk/build-*-manifest.txt 00:04:21.638 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:21.638 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:21.638 + for M in /var/spdk/build-*-manifest.txt 00:04:21.638 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:21.638 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:21.638 ++ uname 00:04:21.638 + [[ Linux == \L\i\n\u\x ]] 00:04:21.638 + sudo dmesg -T 00:04:21.638 + sudo dmesg --clear 00:04:21.638 + dmesg_pid=5292 00:04:21.638 
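[Editor's note] The dmesg_pid recorded above belongs to the follow-mode kernel-log watcher the runner starts just below (sudo dmesg -Tw). A minimal sketch of that capture pattern, assuming a hypothetical output path and a trap-based cleanup that the autorun scripts may implement differently:

  # Clear the kernel ring buffer so the capture only holds messages from this run.
  sudo dmesg --clear
  # Follow new messages with human-readable timestamps; /tmp/run-dmesg.log is a hypothetical path.
  sudo dmesg -Tw > /tmp/run-dmesg.log &
  dmesg_pid=$!
  # Stop the watcher when the run exits, whatever the outcome.
  trap 'sudo kill "$dmesg_pid" 2>/dev/null || true' EXIT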
+ [[ Fedora Linux == FreeBSD ]] 00:04:21.638 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:21.638 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:21.638 + sudo dmesg -Tw 00:04:21.638 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:21.638 + [[ -x /usr/src/fio-static/fio ]] 00:04:21.638 + export FIO_BIN=/usr/src/fio-static/fio 00:04:21.638 + FIO_BIN=/usr/src/fio-static/fio 00:04:21.638 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:21.638 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:21.638 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:21.638 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:21.638 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:21.638 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:21.638 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:21.638 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:21.638 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:21.897 09:01:16 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:21.897 09:01:16 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:21.897 09:01:16 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:04:21.897 09:01:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:21.897 09:01:16 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:21.897 09:01:16 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:21.897 09:01:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.897 09:01:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:21.897 09:01:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:21.897 09:01:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.897 09:01:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.897 09:01:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.897 09:01:16 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.897 09:01:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.897 09:01:16 -- paths/export.sh@5 -- $ export PATH 00:04:21.897 09:01:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.897 09:01:16 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:21.897 09:01:16 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:21.897 09:01:16 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732093276.XXXXXX 00:04:21.897 09:01:16 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732093276.W0NbsJ 00:04:21.897 09:01:16 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:21.897 09:01:16 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:21.897 09:01:16 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:21.897 09:01:16 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:21.897 09:01:16 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:21.897 09:01:16 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:21.897 09:01:16 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:21.897 09:01:16 -- common/autotest_common.sh@10 -- $ set +x 00:04:21.897 09:01:16 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:04:21.897 09:01:16 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:21.897 09:01:16 -- pm/common@17 -- $ local monitor 00:04:21.897 09:01:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.897 09:01:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:21.897 09:01:16 -- pm/common@25 -- $ sleep 1 00:04:21.897 09:01:16 -- pm/common@21 -- $ date +%s 00:04:21.897 09:01:16 -- pm/common@21 -- $ date +%s 00:04:21.897 09:01:16 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732093276 00:04:21.897 09:01:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732093276 00:04:21.897 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732093276_collect-vmstat.pm.log 00:04:21.897 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732093276_collect-cpu-load.pm.log 00:04:22.833 09:01:17 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:22.833 09:01:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:22.833 09:01:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:22.833 09:01:17 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:22.833 09:01:17 -- spdk/autobuild.sh@16 -- $ date -u 00:04:22.833 Wed Nov 20 09:01:17 AM UTC 2024 00:04:22.833 09:01:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:22.833 v25.01-pre-212-ga5dab6cf7 00:04:22.833 09:01:17 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:04:22.833 09:01:17 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:04:22.833 09:01:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:22.833 09:01:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:22.833 09:01:17 -- common/autotest_common.sh@10 -- $ set +x 00:04:22.833 ************************************ 00:04:22.833 START TEST asan 00:04:22.833 ************************************ 00:04:22.833 using asan 00:04:22.833 09:01:17 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:04:22.833 00:04:22.833 real 0m0.000s 00:04:22.833 user 0m0.000s 00:04:22.833 sys 0m0.000s 00:04:22.833 09:01:17 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:22.833 09:01:17 asan -- common/autotest_common.sh@10 -- $ set +x 00:04:22.833 ************************************ 00:04:22.833 END TEST asan 00:04:22.833 ************************************ 00:04:23.092 09:01:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:23.092 09:01:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:23.092 09:01:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:23.092 09:01:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:23.092 09:01:17 -- common/autotest_common.sh@10 -- $ set +x 00:04:23.092 ************************************ 00:04:23.092 START TEST ubsan 00:04:23.092 ************************************ 00:04:23.092 using ubsan 00:04:23.092 09:01:17 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:23.092 00:04:23.092 real 0m0.000s 00:04:23.092 user 0m0.000s 00:04:23.092 sys 0m0.000s 00:04:23.092 09:01:17 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:23.092 09:01:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:23.092 ************************************ 00:04:23.092 END TEST ubsan 00:04:23.092 ************************************ 00:04:23.092 09:01:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:23.092 09:01:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:23.092 09:01:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:23.092 09:01:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:23.092 09:01:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:23.092 09:01:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:23.092 09:01:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
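[Editor's note] The "09:01:18 -- spdk/autobuild.sh@59 -- $ ..." prefixes on the trace lines above are bash xtrace output with a customized PS4 that stamps each traced command with the time, source file, and line number. A minimal sketch of the technique; the exact PS4 string SPDK's common scripts use may differ:

  # Bash expands PS4 (including command substitution) before printing each traced command.
  export PS4='$(date +%H:%M:%S) -- ${BASH_SOURCE[0]##*/}@${LINENO} -- $ '
  set -x
  echo hello    # traced as, e.g.: 09:01:18 -- myscript.sh@4 -- $ echo hello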
00:04:23.092 09:01:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:23.092 09:01:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:04:23.092 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:23.092 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:23.660 Using 'verbs' RDMA provider 00:04:39.575 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:51.777 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:51.778 Creating mk/config.mk...done. 00:04:51.778 Creating mk/cc.flags.mk...done. 00:04:51.778 Type 'make' to build. 00:04:51.778 09:01:46 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:51.778 09:01:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:51.778 09:01:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:51.778 09:01:46 -- common/autotest_common.sh@10 -- $ set +x 00:04:51.778 ************************************ 00:04:51.778 START TEST make 00:04:51.778 ************************************ 00:04:51.778 09:01:46 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:51.778 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:51.778 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:51.778 meson setup builddir \ 00:04:51.778 -Dwith-libaio=enabled \ 00:04:51.778 -Dwith-liburing=enabled \ 00:04:51.778 -Dwith-libvfn=disabled \ 00:04:51.778 -Dwith-spdk=disabled \ 00:04:51.778 -Dexamples=false \ 00:04:51.778 -Dtests=false \ 00:04:51.778 -Dtools=false && \ 00:04:51.778 meson compile -C builddir && \ 00:04:51.778 cd -) 00:04:51.778 make[1]: Nothing to be done for 'all'. 
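[Editor's note] The subshell above pins xnvme's feature set at configure time (libaio and liburing enabled; libvfn, spdk, examples, tests, and tools disabled). A sketch of how the same options can be inspected or flipped by hand after the initial setup, using stock meson commands rather than any SPDK helper:

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson configure builddir                  # list every option and its current value
  meson configure builddir -Dexamples=true  # flip a single option without reconfiguring from scratch
  meson compile -C builddir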
00:04:54.306 The Meson build system 00:04:54.306 Version: 1.5.0 00:04:54.306 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:54.306 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:54.306 Build type: native build 00:04:54.306 Project name: xnvme 00:04:54.306 Project version: 0.7.5 00:04:54.306 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:54.306 C linker for the host machine: cc ld.bfd 2.40-14 00:04:54.306 Host machine cpu family: x86_64 00:04:54.306 Host machine cpu: x86_64 00:04:54.306 Message: host_machine.system: linux 00:04:54.306 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:54.306 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:54.306 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:54.306 Run-time dependency threads found: YES 00:04:54.306 Has header "setupapi.h" : NO 00:04:54.306 Has header "linux/blkzoned.h" : YES 00:04:54.306 Has header "linux/blkzoned.h" : YES (cached) 00:04:54.306 Has header "libaio.h" : YES 00:04:54.306 Library aio found: YES 00:04:54.306 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:54.306 Run-time dependency liburing found: YES 2.2 00:04:54.306 Dependency libvfn skipped: feature with-libvfn disabled 00:04:54.306 Found CMake: /usr/bin/cmake (3.27.7) 00:04:54.306 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:54.306 Subproject spdk : skipped: feature with-spdk disabled 00:04:54.306 Run-time dependency appleframeworks found: NO (tried framework) 00:04:54.306 Run-time dependency appleframeworks found: NO (tried framework) 00:04:54.306 Library rt found: YES 00:04:54.306 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:54.306 Configuring xnvme_config.h using configuration 00:04:54.306 Configuring xnvme.spec using configuration 00:04:54.306 Run-time dependency bash-completion found: YES 2.11 00:04:54.306 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:54.306 Program cp found: YES (/usr/bin/cp) 00:04:54.306 Build targets in project: 3 00:04:54.306 00:04:54.306 xnvme 0.7.5 00:04:54.306 00:04:54.306 Subprojects 00:04:54.306 spdk : NO Feature 'with-spdk' disabled 00:04:54.306 00:04:54.306 User defined options 00:04:54.306 examples : false 00:04:54.306 tests : false 00:04:54.306 tools : false 00:04:54.306 with-libaio : enabled 00:04:54.306 with-liburing: enabled 00:04:54.306 with-libvfn : disabled 00:04:54.306 with-spdk : disabled 00:04:54.306 00:04:54.306 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:54.565 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:54.565 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:54.565 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:54.565 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:54.565 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:54.565 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:54.565 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:54.823 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:54.823 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:04:54.823 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:04:54.823 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:04:54.823 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:54.823 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:54.823 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:54.823 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:54.823 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:54.823 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:54.823 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:54.823 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:54.823 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:54.823 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:54.823 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:54.823 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:55.081 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:55.081 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:55.081 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:55.081 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:55.081 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:55.081 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:55.081 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:55.081 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:55.081 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:55.081 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:55.081 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:55.081 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:55.081 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:55.081 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:55.081 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:55.081 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:55.081 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:55.081 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:55.081 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:55.081 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:04:55.081 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:55.081 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:55.081 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:55.081 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:55.081 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:55.081 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:04:55.081 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:55.081 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:55.081 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:04:55.081 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:55.081 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:55.081 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:55.339 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:55.339 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:55.339 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:55.339 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:55.339 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:55.339 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:55.339 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:55.339 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:55.339 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:55.339 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:55.339 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:55.339 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:55.339 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:55.597 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:55.597 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:55.597 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:55.597 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:55.597 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:55.597 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:55.855 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:55.855 [75/76] Linking static target lib/libxnvme.a 00:04:55.855 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:56.113 INFO: autodetecting backend as ninja 00:04:56.113 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:56.113 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:05:06.087 The Meson build system 00:05:06.087 Version: 1.5.0 00:05:06.087 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:06.087 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:06.087 Build type: native build 00:05:06.087 Program cat found: YES (/usr/bin/cat) 00:05:06.087 Project name: DPDK 00:05:06.087 Project version: 24.03.0 00:05:06.087 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:06.087 C linker for the host machine: cc ld.bfd 2.40-14 00:05:06.087 Host machine cpu family: x86_64 00:05:06.087 Host machine cpu: x86_64 00:05:06.087 Message: ## Building in Developer Mode ## 00:05:06.087 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:06.087 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:06.087 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:06.087 Program python3 found: YES (/usr/bin/python3) 00:05:06.087 Program cat found: YES (/usr/bin/cat) 00:05:06.087 Compiler for C supports arguments -march=native: YES 00:05:06.087 Checking for size of "void *" : 8 00:05:06.087 Checking for size of "void *" : 8 (cached) 00:05:06.087 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:05:06.087 Library m found: YES 00:05:06.087 Library numa found: YES 00:05:06.087 Has header "numaif.h" : YES 00:05:06.087 Library fdt found: NO 00:05:06.087 Library execinfo found: NO 00:05:06.087 Has header "execinfo.h" : YES 00:05:06.087 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:06.087 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:06.087 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:06.087 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:06.087 Run-time dependency openssl found: YES 3.1.1 00:05:06.087 Run-time dependency libpcap found: YES 1.10.4 00:05:06.087 Has header "pcap.h" with dependency libpcap: YES 00:05:06.087 Compiler for C supports arguments -Wcast-qual: YES 00:05:06.087 Compiler for C supports arguments -Wdeprecated: YES 00:05:06.087 Compiler for C supports arguments -Wformat: YES 00:05:06.087 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:06.087 Compiler for C supports arguments -Wformat-security: NO 00:05:06.087 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:06.087 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:06.087 Compiler for C supports arguments -Wnested-externs: YES 00:05:06.087 Compiler for C supports arguments -Wold-style-definition: YES 00:05:06.087 Compiler for C supports arguments -Wpointer-arith: YES 00:05:06.087 Compiler for C supports arguments -Wsign-compare: YES 00:05:06.087 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:06.087 Compiler for C supports arguments -Wundef: YES 00:05:06.087 Compiler for C supports arguments -Wwrite-strings: YES 00:05:06.087 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:06.087 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:06.087 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:06.087 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:06.087 Program objdump found: YES (/usr/bin/objdump) 00:05:06.087 Compiler for C supports arguments -mavx512f: YES 00:05:06.087 Checking if "AVX512 checking" compiles: YES 00:05:06.087 Fetching value of define "__SSE4_2__" : 1 00:05:06.087 Fetching value of define "__AES__" : 1 00:05:06.087 Fetching value of define "__AVX__" : 1 00:05:06.087 Fetching value of define "__AVX2__" : 1 00:05:06.087 Fetching value of define "__AVX512BW__" : (undefined) 00:05:06.087 Fetching value of define "__AVX512CD__" : (undefined) 00:05:06.087 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:06.087 Fetching value of define "__AVX512F__" : (undefined) 00:05:06.087 Fetching value of define "__AVX512VL__" : (undefined) 00:05:06.087 Fetching value of define "__PCLMUL__" : 1 00:05:06.087 Fetching value of define "__RDRND__" : 1 00:05:06.087 Fetching value of define "__RDSEED__" : 1 00:05:06.087 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:06.087 Fetching value of define "__znver1__" : (undefined) 00:05:06.087 Fetching value of define "__znver2__" : (undefined) 00:05:06.087 Fetching value of define "__znver3__" : (undefined) 00:05:06.087 Fetching value of define "__znver4__" : (undefined) 00:05:06.087 Library asan found: YES 00:05:06.087 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:06.087 Message: lib/log: Defining dependency "log" 00:05:06.087 Message: lib/kvargs: Defining dependency "kvargs" 00:05:06.087 Message: lib/telemetry: Defining dependency "telemetry" 00:05:06.087 Library rt found: YES 00:05:06.087 
Checking for function "getentropy" : NO 00:05:06.087 Message: lib/eal: Defining dependency "eal" 00:05:06.087 Message: lib/ring: Defining dependency "ring" 00:05:06.087 Message: lib/rcu: Defining dependency "rcu" 00:05:06.087 Message: lib/mempool: Defining dependency "mempool" 00:05:06.087 Message: lib/mbuf: Defining dependency "mbuf" 00:05:06.087 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:06.087 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:06.087 Compiler for C supports arguments -mpclmul: YES 00:05:06.087 Compiler for C supports arguments -maes: YES 00:05:06.087 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:06.087 Compiler for C supports arguments -mavx512bw: YES 00:05:06.087 Compiler for C supports arguments -mavx512dq: YES 00:05:06.087 Compiler for C supports arguments -mavx512vl: YES 00:05:06.087 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:06.087 Compiler for C supports arguments -mavx2: YES 00:05:06.087 Compiler for C supports arguments -mavx: YES 00:05:06.087 Message: lib/net: Defining dependency "net" 00:05:06.087 Message: lib/meter: Defining dependency "meter" 00:05:06.087 Message: lib/ethdev: Defining dependency "ethdev" 00:05:06.087 Message: lib/pci: Defining dependency "pci" 00:05:06.087 Message: lib/cmdline: Defining dependency "cmdline" 00:05:06.087 Message: lib/hash: Defining dependency "hash" 00:05:06.088 Message: lib/timer: Defining dependency "timer" 00:05:06.088 Message: lib/compressdev: Defining dependency "compressdev" 00:05:06.088 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:06.088 Message: lib/dmadev: Defining dependency "dmadev" 00:05:06.088 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:06.088 Message: lib/power: Defining dependency "power" 00:05:06.088 Message: lib/reorder: Defining dependency "reorder" 00:05:06.088 Message: lib/security: Defining dependency "security" 00:05:06.088 Has header "linux/userfaultfd.h" : YES 00:05:06.088 Has header "linux/vduse.h" : YES 00:05:06.088 Message: lib/vhost: Defining dependency "vhost" 00:05:06.088 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:06.088 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:06.088 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:06.088 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:06.088 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:06.088 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:06.088 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:06.088 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:06.088 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:06.088 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:06.088 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:06.088 Configuring doxy-api-html.conf using configuration 00:05:06.088 Configuring doxy-api-man.conf using configuration 00:05:06.088 Program mandb found: YES (/usr/bin/mandb) 00:05:06.088 Program sphinx-build found: NO 00:05:06.088 Configuring rte_build_config.h using configuration 00:05:06.088 Message: 00:05:06.088 ================= 00:05:06.088 Applications Enabled 00:05:06.088 ================= 00:05:06.088 00:05:06.088 apps: 00:05:06.088 00:05:06.088 00:05:06.088 Message: 00:05:06.088 ================= 00:05:06.088 Libraries Enabled 00:05:06.088 ================= 
00:05:06.088
00:05:06.088 libs:
00:05:06.088 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:06.088 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:06.088 cryptodev, dmadev, power, reorder, security, vhost,
00:05:06.088
00:05:06.088 Message:
00:05:06.088 ===============
00:05:06.088 Drivers Enabled
00:05:06.088 ===============
00:05:06.088
00:05:06.088 common:
00:05:06.088
00:05:06.088 bus:
00:05:06.088 pci, vdev,
00:05:06.088 mempool:
00:05:06.088 ring,
00:05:06.088 dma:
00:05:06.088
00:05:06.088 net:
00:05:06.088
00:05:06.088 crypto:
00:05:06.088
00:05:06.088 compress:
00:05:06.088
00:05:06.088 vdpa:
00:05:06.088
00:05:06.088
00:05:06.088 Message:
00:05:06.088 =================
00:05:06.088 Content Skipped
00:05:06.088 =================
00:05:06.088
00:05:06.088 apps:
00:05:06.088 dumpcap: explicitly disabled via build config
00:05:06.088 graph: explicitly disabled via build config
00:05:06.088 pdump: explicitly disabled via build config
00:05:06.088 proc-info: explicitly disabled via build config
00:05:06.088 test-acl: explicitly disabled via build config
00:05:06.088 test-bbdev: explicitly disabled via build config
00:05:06.088 test-cmdline: explicitly disabled via build config
00:05:06.088 test-compress-perf: explicitly disabled via build config
00:05:06.088 test-crypto-perf: explicitly disabled via build config
00:05:06.088 test-dma-perf: explicitly disabled via build config
00:05:06.088 test-eventdev: explicitly disabled via build config
00:05:06.088 test-fib: explicitly disabled via build config
00:05:06.088 test-flow-perf: explicitly disabled via build config
00:05:06.088 test-gpudev: explicitly disabled via build config
00:05:06.088 test-mldev: explicitly disabled via build config
00:05:06.088 test-pipeline: explicitly disabled via build config
00:05:06.088 test-pmd: explicitly disabled via build config
00:05:06.088 test-regex: explicitly disabled via build config
00:05:06.088 test-sad: explicitly disabled via build config
00:05:06.088 test-security-perf: explicitly disabled via build config
00:05:06.088
00:05:06.088 libs:
00:05:06.088 argparse: explicitly disabled via build config
00:05:06.088 metrics: explicitly disabled via build config
00:05:06.088 acl: explicitly disabled via build config
00:05:06.088 bbdev: explicitly disabled via build config
00:05:06.088 bitratestats: explicitly disabled via build config
00:05:06.088 bpf: explicitly disabled via build config
00:05:06.088 cfgfile: explicitly disabled via build config
00:05:06.088 distributor: explicitly disabled via build config
00:05:06.088 efd: explicitly disabled via build config
00:05:06.088 eventdev: explicitly disabled via build config
00:05:06.088 dispatcher: explicitly disabled via build config
00:05:06.088 gpudev: explicitly disabled via build config
00:05:06.088 gro: explicitly disabled via build config
00:05:06.088 gso: explicitly disabled via build config
00:05:06.088 ip_frag: explicitly disabled via build config
00:05:06.088 jobstats: explicitly disabled via build config
00:05:06.088 latencystats: explicitly disabled via build config
00:05:06.088 lpm: explicitly disabled via build config
00:05:06.088 member: explicitly disabled via build config
00:05:06.088 pcapng: explicitly disabled via build config
00:05:06.088 rawdev: explicitly disabled via build config
00:05:06.088 regexdev: explicitly disabled via build config
00:05:06.088 mldev: explicitly disabled via build config
00:05:06.088 rib: explicitly disabled via build config
00:05:06.088 sched: explicitly disabled via build config
00:05:06.088 stack: explicitly disabled via build config
00:05:06.088 ipsec: explicitly disabled via build config
00:05:06.088 pdcp: explicitly disabled via build config
00:05:06.088 fib: explicitly disabled via build config
00:05:06.088 port: explicitly disabled via build config
00:05:06.088 pdump: explicitly disabled via build config
00:05:06.088 table: explicitly disabled via build config
00:05:06.088 pipeline: explicitly disabled via build config
00:05:06.088 graph: explicitly disabled via build config
00:05:06.088 node: explicitly disabled via build config
00:05:06.088
00:05:06.088 drivers:
00:05:06.088 common/cpt: not in enabled drivers build config
00:05:06.088 common/dpaax: not in enabled drivers build config
00:05:06.088 common/iavf: not in enabled drivers build config
00:05:06.088 common/idpf: not in enabled drivers build config
00:05:06.088 common/ionic: not in enabled drivers build config
00:05:06.088 common/mvep: not in enabled drivers build config
00:05:06.088 common/octeontx: not in enabled drivers build config
00:05:06.088 bus/auxiliary: not in enabled drivers build config
00:05:06.088 bus/cdx: not in enabled drivers build config
00:05:06.088 bus/dpaa: not in enabled drivers build config
00:05:06.088 bus/fslmc: not in enabled drivers build config
00:05:06.088 bus/ifpga: not in enabled drivers build config
00:05:06.088 bus/platform: not in enabled drivers build config
00:05:06.088 bus/uacce: not in enabled drivers build config
00:05:06.088 bus/vmbus: not in enabled drivers build config
00:05:06.088 common/cnxk: not in enabled drivers build config
00:05:06.088 common/mlx5: not in enabled drivers build config
00:05:06.088 common/nfp: not in enabled drivers build config
00:05:06.088 common/nitrox: not in enabled drivers build config
00:05:06.088 common/qat: not in enabled drivers build config
00:05:06.088 common/sfc_efx: not in enabled drivers build config
00:05:06.088 mempool/bucket: not in enabled drivers build config
00:05:06.088 mempool/cnxk: not in enabled drivers build config
00:05:06.088 mempool/dpaa: not in enabled drivers build config
00:05:06.088 mempool/dpaa2: not in enabled drivers build config
00:05:06.088 mempool/octeontx: not in enabled drivers build config
00:05:06.088 mempool/stack: not in enabled drivers build config
00:05:06.088 dma/cnxk: not in enabled drivers build config
00:05:06.088 dma/dpaa: not in enabled drivers build config
00:05:06.088 dma/dpaa2: not in enabled drivers build config
00:05:06.088 dma/hisilicon: not in enabled drivers build config
00:05:06.088 dma/idxd: not in enabled drivers build config
00:05:06.088 dma/ioat: not in enabled drivers build config
00:05:06.088 dma/skeleton: not in enabled drivers build config
00:05:06.088 net/af_packet: not in enabled drivers build config
00:05:06.088 net/af_xdp: not in enabled drivers build config
00:05:06.088 net/ark: not in enabled drivers build config
00:05:06.088 net/atlantic: not in enabled drivers build config
00:05:06.088 net/avp: not in enabled drivers build config
00:05:06.088 net/axgbe: not in enabled drivers build config
00:05:06.088 net/bnx2x: not in enabled drivers build config
00:05:06.088 net/bnxt: not in enabled drivers build config
00:05:06.088 net/bonding: not in enabled drivers build config
00:05:06.088 net/cnxk: not in enabled drivers build config
00:05:06.088 net/cpfl: not in enabled drivers build config
00:05:06.088 net/cxgbe: not in enabled drivers build config
00:05:06.088 net/dpaa: not in enabled drivers build config
00:05:06.088 net/dpaa2: not in enabled drivers build config
00:05:06.088 net/e1000: not in enabled drivers build config
00:05:06.088 net/ena: not in enabled drivers build config
00:05:06.088 net/enetc: not in enabled drivers build config
00:05:06.088 net/enetfec: not in enabled drivers build config
00:05:06.088 net/enic: not in enabled drivers build config
00:05:06.088 net/failsafe: not in enabled drivers build config
00:05:06.088 net/fm10k: not in enabled drivers build config
00:05:06.088 net/gve: not in enabled drivers build config
00:05:06.088 net/hinic: not in enabled drivers build config
00:05:06.088 net/hns3: not in enabled drivers build config
00:05:06.088 net/i40e: not in enabled drivers build config
00:05:06.088 net/iavf: not in enabled drivers build config
00:05:06.088 net/ice: not in enabled drivers build config
00:05:06.088 net/idpf: not in enabled drivers build config
00:05:06.088 net/igc: not in enabled drivers build config
00:05:06.088 net/ionic: not in enabled drivers build config
00:05:06.088 net/ipn3ke: not in enabled drivers build config
00:05:06.088 net/ixgbe: not in enabled drivers build config
00:05:06.088 net/mana: not in enabled drivers build config
00:05:06.088 net/memif: not in enabled drivers build config
00:05:06.088 net/mlx4: not in enabled drivers build config
00:05:06.089 net/mlx5: not in enabled drivers build config
00:05:06.089 net/mvneta: not in enabled drivers build config
00:05:06.089 net/mvpp2: not in enabled drivers build config
00:05:06.089 net/netvsc: not in enabled drivers build config
00:05:06.089 net/nfb: not in enabled drivers build config
00:05:06.089 net/nfp: not in enabled drivers build config
00:05:06.089 net/ngbe: not in enabled drivers build config
00:05:06.089 net/null: not in enabled drivers build config
00:05:06.089 net/octeontx: not in enabled drivers build config
00:05:06.089 net/octeon_ep: not in enabled drivers build config
00:05:06.089 net/pcap: not in enabled drivers build config
00:05:06.089 net/pfe: not in enabled drivers build config
00:05:06.089 net/qede: not in enabled drivers build config
00:05:06.089 net/ring: not in enabled drivers build config
00:05:06.089 net/sfc: not in enabled drivers build config
00:05:06.089 net/softnic: not in enabled drivers build config
00:05:06.089 net/tap: not in enabled drivers build config
00:05:06.089 net/thunderx: not in enabled drivers build config
00:05:06.089 net/txgbe: not in enabled drivers build config
00:05:06.089 net/vdev_netvsc: not in enabled drivers build config
00:05:06.089 net/vhost: not in enabled drivers build config
00:05:06.089 net/virtio: not in enabled drivers build config
00:05:06.089 net/vmxnet3: not in enabled drivers build config
00:05:06.089 raw/*: missing internal dependency, "rawdev"
00:05:06.089 crypto/armv8: not in enabled drivers build config
00:05:06.089 crypto/bcmfs: not in enabled drivers build config
00:05:06.089 crypto/caam_jr: not in enabled drivers build config
00:05:06.089 crypto/ccp: not in enabled drivers build config
00:05:06.089 crypto/cnxk: not in enabled drivers build config
00:05:06.089 crypto/dpaa_sec: not in enabled drivers build config
00:05:06.089 crypto/dpaa2_sec: not in enabled drivers build config
00:05:06.089 crypto/ipsec_mb: not in enabled drivers build config
00:05:06.089 crypto/mlx5: not in enabled drivers build config
00:05:06.089 crypto/mvsam: not in enabled drivers build config
00:05:06.089 crypto/nitrox: not in enabled drivers build config
00:05:06.089 crypto/null: not in enabled drivers build config
00:05:06.089 crypto/octeontx: not in enabled drivers build config
00:05:06.089 crypto/openssl: not in enabled drivers build config
00:05:06.089 crypto/scheduler: not in enabled drivers build config
00:05:06.089 crypto/uadk: not in enabled drivers build config
00:05:06.089 crypto/virtio: not in enabled drivers build config
00:05:06.089 compress/isal: not in enabled drivers build config
00:05:06.089 compress/mlx5: not in enabled drivers build config
00:05:06.089 compress/nitrox: not in enabled drivers build config
00:05:06.089 compress/octeontx: not in enabled drivers build config
00:05:06.089 compress/zlib: not in enabled drivers build config
00:05:06.089 regex/*: missing internal dependency, "regexdev"
00:05:06.089 ml/*: missing internal dependency, "mldev"
00:05:06.089 vdpa/ifc: not in enabled drivers build config
00:05:06.089 vdpa/mlx5: not in enabled drivers build config
00:05:06.089 vdpa/nfp: not in enabled drivers build config
00:05:06.089 vdpa/sfc: not in enabled drivers build config
00:05:06.089 event/*: missing internal dependency, "eventdev"
00:05:06.089 baseband/*: missing internal dependency, "bbdev"
00:05:06.089 gpu/*: missing internal dependency, "gpudev"
00:05:06.089
00:05:06.089
00:05:06.089 Build targets in project: 85
00:05:06.089
00:05:06.089 DPDK 24.03.0
00:05:06.089
00:05:06.089 User defined options
00:05:06.089 buildtype : debug
00:05:06.089 default_library : shared
00:05:06.089 libdir : lib
00:05:06.089 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:06.089 b_sanitize : address
00:05:06.089 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:06.089 c_link_args :
00:05:06.089 cpu_instruction_set: native
00:05:06.089 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:05:06.089 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:05:06.089 enable_docs : false
00:05:06.089 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:05:06.089 enable_kmods : false
00:05:06.089 max_lcores : 128
00:05:06.089 tests : false
00:05:06.089
00:05:06.089 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:06.349 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:05:06.349 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:06.349 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:06.349 [3/268] Linking static target lib/librte_kvargs.a
00:05:06.607 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:06.607 [5/268] Linking static target lib/librte_log.a
00:05:06.607 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:06.866 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:07.125 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:07.125 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:07.125 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:07.384 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:07.384 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:07.384 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:07.384 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:07.384 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:07.384 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:07.642 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:07.642 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:07.642 [19/268] Linking static target lib/librte_telemetry.a
00:05:07.642 [20/268] Linking target lib/librte_log.so.24.1
00:05:07.901 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:08.159 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:08.159 [23/268] Linking target lib/librte_kvargs.so.24.1
00:05:08.418 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:08.418 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:08.418 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:08.418 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:08.418 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:08.418 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:08.418 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:08.677 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:08.677 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:08.677 [33/268] Linking target lib/librte_telemetry.so.24.1
00:05:08.677 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:08.936 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:08.936 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:09.195 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:09.195 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:09.195 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:09.453 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:09.453 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:09.453 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:09.453 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:09.453 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:09.453 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:09.713 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:09.975 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:10.233 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:10.233 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:10.233 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:05:10.233 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:10.233 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:10.490 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:05:10.490 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:05:10.490 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:10.490 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:10.748 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:10.748 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:10.748 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:11.007 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:11.007 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:11.007 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:11.007 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:11.265 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:11.265 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:11.265 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:11.523 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:11.523 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:11.523 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:11.781 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:11.781 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:11.781 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:11.781 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:11.781 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:11.781 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:11.781 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:12.040 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:12.040 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:12.040 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:12.300 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:12.300 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:12.559 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:12.559 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:12.816 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:12.816 [85/268] Linking static target lib/librte_ring.a
00:05:12.816 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:12.816 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:12.816 [88/268] Linking static target lib/librte_eal.a
00:05:12.817 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:12.817 [90/268] Linking static target lib/librte_rcu.a
00:05:13.074 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:13.333 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:13.333 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:13.333 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:13.333 [95/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:13.333 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:13.591 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:13.591 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:13.591 [99/268] Linking static target lib/librte_mempool.a
00:05:13.591 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:13.591 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:14.158 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:14.158 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:14.158 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:05:14.158 [105/268] Linking static target lib/librte_mbuf.a
00:05:14.158 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:14.158 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:14.158 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:14.158 [109/268] Linking static target lib/librte_net.a
00:05:14.416 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:14.416 [111/268] Linking static target lib/librte_meter.a
00:05:14.675 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:14.675 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:05:14.675 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:14.675 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:14.934 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:05:14.934 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:14.934 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:15.192 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:05:15.451 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:15.709 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:15.709 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:15.967 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:16.224 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:16.224 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:16.225 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:16.225 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:16.483 [128/268] Linking static target lib/librte_pci.a
00:05:16.483 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:16.483 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:16.483 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:16.742 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:16.742 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:16.742 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:16.742 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:16.742 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:17.000 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:17.000 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:17.000 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:17.000 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:17.000 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:17.000 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:17.000 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:17.000 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:05:17.259 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:17.259 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:17.259 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:17.259 [148/268] Linking static target lib/librte_cmdline.a
00:05:17.824 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:05:17.824 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:05:17.824 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:05:17.824 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:05:17.824 [153/268] Linking static target lib/librte_timer.a
00:05:17.824 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:05:18.083 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:18.083 [156/268] Linking static target lib/librte_ethdev.a
00:05:18.083 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:05:18.650 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:05:18.650 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:05:18.650 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:05:18.650 [161/268] Linking static target lib/librte_hash.a
00:05:18.650 [162/268] Linking static target lib/librte_compressdev.a
00:05:18.650 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:05:18.909 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:05:18.909 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:19.167 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:05:19.167 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:19.167 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:19.167 [169/268] Linking static target lib/librte_dmadev.a
00:05:19.167 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:05:19.426 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:19.685 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:05:19.685 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:05:19.685 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:19.944 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:05:19.944 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:05:19.944 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:05:20.203 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:05:20.203 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:20.203 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:05:20.203 [181/268] Linking static target lib/librte_cryptodev.a
00:05:20.203 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:05:20.203 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:20.203 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:20.773 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:05:20.773 [186/268] Linking static target lib/librte_power.a
00:05:21.032 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:05:21.032 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:05:21.032 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:05:21.032 [190/268] Linking static target lib/librte_reorder.a
00:05:21.290 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:05:21.290 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:05:21.290 [193/268] Linking static target lib/librte_security.a
00:05:21.549 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:05:21.808 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:05:22.066 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:05:22.066 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:05:22.324 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:05:22.324 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:22.324 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:05:22.583 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:22.583 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:05:22.583 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:05:22.841 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:23.100 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:05:23.100 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:05:23.358 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:05:23.617 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:05:23.617 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:05:23.617 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:05:23.617 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:05:23.617 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:05:23.617 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:05:23.876 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:05:23.876 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:23.876 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:23.876 [217/268] Linking static target drivers/librte_bus_vdev.a
00:05:23.876 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:05:23.876 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:23.876 [220/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:05:23.876 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:23.876 [222/268] Linking static target drivers/librte_bus_pci.a
00:05:23.876 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:23.876 [224/268] Linking static target drivers/librte_mempool_ring.a
00:05:23.876 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:24.135 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:24.393 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:25.326 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:05:25.326 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:05:25.326 [230/268] Linking target lib/librte_eal.so.24.1
00:05:25.583 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:05:25.583 [232/268] Linking target lib/librte_pci.so.24.1
00:05:25.583 [233/268] Linking target lib/librte_meter.so.24.1
00:05:25.584 [234/268] Linking target lib/librte_ring.so.24.1
00:05:25.584 [235/268] Linking target lib/librte_timer.so.24.1
00:05:25.584 [236/268] Linking target drivers/librte_bus_vdev.so.24.1
00:05:25.584 [237/268] Linking target lib/librte_dmadev.so.24.1
00:05:25.584 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:05:25.841 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:05:25.841 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:05:25.841 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:05:25.841 [242/268] Linking target drivers/librte_bus_pci.so.24.1
00:05:25.841 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:05:25.841 [244/268] Linking target lib/librte_mempool.so.24.1
00:05:25.841 [245/268] Linking target lib/librte_rcu.so.24.1
00:05:25.841 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:05:26.099 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:05:26.099 [248/268] Linking target lib/librte_mbuf.so.24.1
00:05:26.099 [249/268] Linking target drivers/librte_mempool_ring.so.24.1
00:05:26.099 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:05:26.099 [251/268] Linking target lib/librte_compressdev.so.24.1
00:05:26.099 [252/268] Linking target lib/librte_reorder.so.24.1
00:05:26.099 [253/268] Linking target lib/librte_net.so.24.1
00:05:26.099 [254/268] Linking target lib/librte_cryptodev.so.24.1
00:05:26.356 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:05:26.356 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:05:26.356 [257/268] Linking target lib/librte_hash.so.24.1
00:05:26.356 [258/268] Linking target lib/librte_security.so.24.1
00:05:26.356 [259/268] Linking target lib/librte_cmdline.so.24.1
00:05:26.614 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:26.614 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:05:26.614 [262/268] Linking target lib/librte_ethdev.so.24.1
00:05:26.871 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:05:26.871 [264/268] Linking target lib/librte_power.so.24.1
00:05:29.427 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:05:29.427 [266/268] Linking static target lib/librte_vhost.a
00:05:31.328 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:05:31.328 [268/268] Linking target lib/librte_vhost.so.24.1
00:05:31.328 INFO: autodetecting backend as ninja
00:05:31.328 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:05:53.257 CC lib/ut/ut.o
00:05:53.257 CC lib/log/log.o
00:05:53.257 CC lib/log/log_flags.o
00:05:53.257 CC lib/log/log_deprecated.o
00:05:53.257 CC lib/ut_mock/mock.o
00:05:53.257 LIB libspdk_ut.a
00:05:53.257 LIB libspdk_ut_mock.a
00:05:53.257 LIB libspdk_log.a
00:05:53.257 SO libspdk_ut.so.2.0
00:05:53.257 SO libspdk_ut_mock.so.6.0
00:05:53.257 SO libspdk_log.so.7.1
00:05:53.257 SYMLINK libspdk_ut.so
00:05:53.257 SYMLINK libspdk_ut_mock.so
00:05:53.257 SYMLINK libspdk_log.so
00:05:53.257 CC lib/util/base64.o
00:05:53.257 CC lib/util/bit_array.o
00:05:53.257 CC lib/util/cpuset.o
00:05:53.257 CC lib/util/crc16.o
00:05:53.257 CC lib/util/crc32.o
00:05:53.257 CC lib/util/crc32c.o
00:05:53.257 CC lib/dma/dma.o
00:05:53.257 CC lib/ioat/ioat.o
00:05:53.257 CXX lib/trace_parser/trace.o
00:05:53.257 CC lib/vfio_user/host/vfio_user_pci.o
00:05:53.257 CC lib/util/crc32_ieee.o
00:05:53.257 CC lib/util/crc64.o
00:05:53.257 CC lib/util/dif.o
00:05:53.257 CC lib/util/fd.o
00:05:53.257 CC lib/util/fd_group.o
00:05:53.257 LIB libspdk_dma.a
00:05:53.257 CC lib/vfio_user/host/vfio_user.o
00:05:53.257 CC lib/util/file.o
00:05:53.257 SO libspdk_dma.so.5.0
00:05:53.257 CC lib/util/hexlify.o
00:05:53.257 SYMLINK libspdk_dma.so
00:05:53.257 CC lib/util/iov.o
00:05:53.257 LIB libspdk_ioat.a
00:05:53.257 CC lib/util/math.o
00:05:53.257 SO libspdk_ioat.so.7.0
00:05:53.257 CC lib/util/net.o
00:05:53.257 SYMLINK libspdk_ioat.so
00:05:53.257 CC lib/util/pipe.o
00:05:53.257 CC lib/util/strerror_tls.o
00:05:53.257 CC lib/util/string.o
00:05:53.257 LIB libspdk_vfio_user.a
00:05:53.257 SO libspdk_vfio_user.so.5.0
00:05:53.257 CC lib/util/uuid.o
00:05:53.257 CC lib/util/xor.o
00:05:53.515 SYMLINK libspdk_vfio_user.so
00:05:53.515 CC lib/util/zipf.o
00:05:53.515 CC lib/util/md5.o
00:05:53.779 LIB libspdk_util.a
00:05:53.779 SO libspdk_util.so.10.1
00:05:54.042 SYMLINK libspdk_util.so
00:05:54.042 LIB libspdk_trace_parser.a
00:05:54.042 SO libspdk_trace_parser.so.6.0
00:05:54.300 CC lib/conf/conf.o
00:05:54.300 CC lib/env_dpdk/env.o
00:05:54.300 CC lib/idxd/idxd.o
00:05:54.300 CC lib/idxd/idxd_user.o
00:05:54.300 CC lib/rdma_utils/rdma_utils.o
00:05:54.300 CC lib/idxd/idxd_kernel.o
00:05:54.300 CC lib/env_dpdk/memory.o
00:05:54.300 CC lib/json/json_parse.o
00:05:54.300 CC lib/vmd/vmd.o
00:05:54.300 SYMLINK libspdk_trace_parser.so
00:05:54.300 CC lib/json/json_util.o
00:05:54.300 CC lib/env_dpdk/pci.o
00:05:54.559 LIB libspdk_conf.a
00:05:54.559 SO libspdk_conf.so.6.0
00:05:54.559 LIB libspdk_rdma_utils.a
00:05:54.559 CC lib/env_dpdk/init.o
00:05:54.559 CC lib/env_dpdk/threads.o
00:05:54.559 SO libspdk_rdma_utils.so.1.0
00:05:54.559 CC lib/json/json_write.o
00:05:54.559 SYMLINK libspdk_conf.so
00:05:54.559 CC lib/vmd/led.o
00:05:54.559 SYMLINK libspdk_rdma_utils.so
00:05:54.559 CC lib/env_dpdk/pci_ioat.o
00:05:54.817 CC lib/env_dpdk/pci_virtio.o
00:05:54.817 CC lib/env_dpdk/pci_vmd.o
00:05:54.817 CC lib/env_dpdk/pci_idxd.o
00:05:54.817 CC lib/env_dpdk/pci_event.o
00:05:55.076 LIB libspdk_json.a
00:05:55.076 CC lib/rdma_provider/common.o
00:05:55.076 CC lib/env_dpdk/sigbus_handler.o
00:05:55.076 SO libspdk_json.so.6.0
00:05:55.076 CC lib/env_dpdk/pci_dpdk.o
00:05:55.076 CC lib/rdma_provider/rdma_provider_verbs.o
00:05:55.076 SYMLINK libspdk_json.so
00:05:55.076 CC lib/env_dpdk/pci_dpdk_2207.o
00:05:55.076 CC lib/env_dpdk/pci_dpdk_2211.o
00:05:55.076 LIB libspdk_idxd.a
00:05:55.076 SO libspdk_idxd.so.12.1
00:05:55.336 LIB libspdk_vmd.a
00:05:55.336 SYMLINK libspdk_idxd.so
00:05:55.336 SO libspdk_vmd.so.6.0
00:05:55.336 LIB libspdk_rdma_provider.a
00:05:55.336 SYMLINK libspdk_vmd.so
00:05:55.336 SO libspdk_rdma_provider.so.7.0
00:05:55.336 CC lib/jsonrpc/jsonrpc_server.o
00:05:55.336 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:05:55.336 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:05:55.336 CC lib/jsonrpc/jsonrpc_client.o
00:05:55.336 SYMLINK libspdk_rdma_provider.so
00:05:55.595 LIB libspdk_jsonrpc.a
00:05:55.854 SO libspdk_jsonrpc.so.6.0
00:05:55.854 SYMLINK libspdk_jsonrpc.so
00:05:56.113 CC lib/rpc/rpc.o
00:05:56.113 LIB libspdk_env_dpdk.a
00:05:56.372 SO libspdk_env_dpdk.so.15.1
00:05:56.372 LIB libspdk_rpc.a
00:05:56.372 SYMLINK libspdk_env_dpdk.so
00:05:56.372 SO libspdk_rpc.so.6.0
00:05:56.631 SYMLINK libspdk_rpc.so
00:05:56.631 CC lib/notify/notify.o
00:05:56.631 CC lib/keyring/keyring.o
00:05:56.631 CC lib/keyring/keyring_rpc.o
00:05:56.631 CC lib/notify/notify_rpc.o
00:05:56.631 CC lib/trace/trace_rpc.o
00:05:56.631 CC lib/trace/trace_flags.o
00:05:56.631 CC lib/trace/trace.o
00:05:56.889 LIB libspdk_notify.a
00:05:56.889 SO libspdk_notify.so.6.0
00:05:57.148 SYMLINK libspdk_notify.so
00:05:57.148 LIB libspdk_keyring.a
00:05:57.148 SO libspdk_keyring.so.2.0
00:05:57.148 LIB libspdk_trace.a
00:05:57.148 SO libspdk_trace.so.11.0
00:05:57.148 SYMLINK libspdk_keyring.so
00:05:57.406 SYMLINK libspdk_trace.so
00:05:57.665 CC lib/thread/thread.o
00:05:57.665 CC lib/thread/iobuf.o
00:05:57.665 CC lib/sock/sock.o
00:05:57.665 CC lib/sock/sock_rpc.o
00:05:58.232 LIB libspdk_sock.a
00:05:58.232 SO libspdk_sock.so.10.0
00:05:58.232 SYMLINK libspdk_sock.so
00:05:58.490 CC lib/nvme/nvme_ctrlr_cmd.o
00:05:58.490 CC lib/nvme/nvme_ns_cmd.o
00:05:58.490 CC lib/nvme/nvme_ctrlr.o
00:05:58.490 CC lib/nvme/nvme_fabric.o
00:05:58.490 CC lib/nvme/nvme_ns.o
00:05:58.490 CC lib/nvme/nvme_pcie.o
00:05:58.490 CC lib/nvme/nvme_pcie_common.o
00:05:58.490 CC lib/nvme/nvme_qpair.o
00:05:58.490 CC lib/nvme/nvme.o
00:05:59.425 CC lib/nvme/nvme_quirks.o
00:05:59.425 CC lib/nvme/nvme_transport.o
00:05:59.425 CC lib/nvme/nvme_discovery.o
00:05:59.425 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:05:59.684 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:05:59.684 CC lib/nvme/nvme_tcp.o
00:05:59.684 LIB libspdk_thread.a
00:05:59.943 SO libspdk_thread.so.11.0
00:05:59.943 CC lib/nvme/nvme_opal.o
00:05:59.943 CC lib/nvme/nvme_io_msg.o
00:05:59.943 SYMLINK libspdk_thread.so
00:05:59.943 CC lib/nvme/nvme_poll_group.o
00:05:59.943 CC lib/nvme/nvme_zns.o
00:06:00.203 CC lib/nvme/nvme_stubs.o
00:06:00.203 CC lib/nvme/nvme_auth.o
00:06:00.462 CC lib/nvme/nvme_cuse.o
00:06:00.462 CC lib/nvme/nvme_rdma.o
00:06:00.721 CC lib/accel/accel.o
00:06:00.721 CC lib/blob/blobstore.o
00:06:00.721 CC lib/blob/request.o
00:06:00.721 CC lib/blob/zeroes.o
00:06:00.980 CC lib/init/json_config.o
00:06:00.980 CC lib/blob/blob_bs_dev.o
00:06:01.238 CC lib/init/subsystem.o
00:06:01.238 CC lib/accel/accel_rpc.o
00:06:01.238 CC lib/accel/accel_sw.o
00:06:01.496 CC lib/init/subsystem_rpc.o
00:06:01.496 CC lib/init/rpc.o
00:06:01.496 CC lib/virtio/virtio.o
00:06:01.496 CC lib/virtio/virtio_vhost_user.o
00:06:01.496 CC lib/fsdev/fsdev.o
00:06:01.755 CC lib/fsdev/fsdev_io.o
00:06:01.755 CC lib/virtio/virtio_vfio_user.o
00:06:01.755 LIB libspdk_init.a
00:06:01.755 CC lib/fsdev/fsdev_rpc.o
00:06:01.755 SO libspdk_init.so.6.0
00:06:01.755 SYMLINK libspdk_init.so
00:06:01.755 CC lib/virtio/virtio_pci.o
00:06:02.014 CC lib/event/reactor.o
00:06:02.014 CC lib/event/app_rpc.o
00:06:02.014 CC lib/event/app.o
00:06:02.014 CC lib/event/log_rpc.o
00:06:02.014 CC lib/event/scheduler_static.o
00:06:02.014 LIB libspdk_nvme.a
00:06:02.277 LIB libspdk_accel.a
00:06:02.277 SO libspdk_accel.so.16.0
00:06:02.277 LIB libspdk_virtio.a
00:06:02.277 SO libspdk_virtio.so.7.0
00:06:02.277 SYMLINK libspdk_accel.so
00:06:02.277 SO libspdk_nvme.so.15.0
00:06:02.277 LIB libspdk_fsdev.a
00:06:02.542 SYMLINK libspdk_virtio.so
00:06:02.542 SO libspdk_fsdev.so.2.0
00:06:02.542 SYMLINK libspdk_fsdev.so
00:06:02.542 CC lib/bdev/bdev.o
00:06:02.542 CC lib/bdev/bdev_rpc.o
00:06:02.542 CC lib/bdev/bdev_zone.o
00:06:02.542 CC lib/bdev/scsi_nvme.o
00:06:02.542 CC lib/bdev/part.o
00:06:02.542 LIB libspdk_event.a
00:06:02.801 SYMLINK libspdk_nvme.so
00:06:02.801 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:06:02.801 SO libspdk_event.so.14.0
00:06:02.801 SYMLINK libspdk_event.so
00:06:03.737 LIB libspdk_fuse_dispatcher.a
00:06:03.737 SO libspdk_fuse_dispatcher.so.1.0
00:06:03.737 SYMLINK libspdk_fuse_dispatcher.so
00:06:05.113 LIB libspdk_blob.a
00:06:05.371 SO libspdk_blob.so.11.0
00:06:05.371 SYMLINK libspdk_blob.so
00:06:05.630 CC lib/lvol/lvol.o
00:06:05.630 CC lib/blobfs/blobfs.o
00:06:05.630 CC lib/blobfs/tree.o
00:06:06.567 LIB libspdk_bdev.a
00:06:06.567 SO libspdk_bdev.so.17.0
00:06:06.567 SYMLINK libspdk_bdev.so
00:06:06.826 CC lib/nvmf/ctrlr.o
00:06:06.826 CC lib/nvmf/ctrlr_discovery.o
00:06:06.826 CC lib/nvmf/ctrlr_bdev.o
00:06:06.826 CC lib/nvmf/subsystem.o
00:06:06.826 CC lib/scsi/dev.o
00:06:06.826 CC lib/ftl/ftl_core.o
00:06:06.826 CC lib/nbd/nbd.o
00:06:06.826 CC lib/ublk/ublk.o
00:06:06.826 LIB libspdk_blobfs.a
00:06:06.826 SO libspdk_blobfs.so.10.0
00:06:07.085 LIB libspdk_lvol.a
00:06:07.085 SYMLINK libspdk_blobfs.so
00:06:07.085 SO libspdk_lvol.so.10.0
00:06:07.085 SYMLINK libspdk_lvol.so
00:06:07.085 CC lib/ublk/ublk_rpc.o
00:06:07.085 CC lib/nvmf/nvmf.o
00:06:07.085 CC lib/scsi/lun.o
00:06:07.085 CC lib/nvmf/nvmf_rpc.o
00:06:07.344 CC lib/ftl/ftl_init.o
00:06:07.344 CC lib/nbd/nbd_rpc.o
00:06:07.603 CC lib/ftl/ftl_layout.o
00:06:07.603 CC lib/scsi/port.o
00:06:07.603 CC lib/scsi/scsi.o
00:06:07.603 LIB libspdk_nbd.a
00:06:07.603 SO libspdk_nbd.so.7.0
00:06:07.603 LIB libspdk_ublk.a
00:06:07.603 CC lib/nvmf/transport.o
00:06:07.603 SO libspdk_ublk.so.3.0
00:06:07.862 SYMLINK libspdk_nbd.so
00:06:07.862 CC lib/ftl/ftl_debug.o
00:06:07.862 CC lib/nvmf/tcp.o
00:06:07.862 CC lib/scsi/scsi_bdev.o
00:06:07.862 SYMLINK libspdk_ublk.so
00:06:07.862 CC lib/scsi/scsi_pr.o
00:06:07.862 CC lib/ftl/ftl_io.o
00:06:08.121 CC lib/ftl/ftl_sb.o
00:06:08.121 CC lib/scsi/scsi_rpc.o
00:06:08.121 CC lib/nvmf/stubs.o
00:06:08.121 CC lib/nvmf/mdns_server.o
00:06:08.121 CC lib/ftl/ftl_l2p.o
00:06:08.379 CC lib/nvmf/rdma.o
00:06:08.379 CC lib/scsi/task.o
00:06:08.379 CC lib/ftl/ftl_l2p_flat.o
00:06:08.379 CC lib/nvmf/auth.o
00:06:08.379 CC lib/ftl/ftl_nv_cache.o
00:06:08.638 LIB libspdk_scsi.a
00:06:08.638 CC lib/ftl/ftl_band.o
00:06:08.638 CC lib/ftl/ftl_band_ops.o
00:06:08.638 SO libspdk_scsi.so.9.0
00:06:08.897 CC lib/ftl/ftl_writer.o
00:06:08.897 SYMLINK libspdk_scsi.so
00:06:08.897 CC lib/ftl/ftl_rq.o
00:06:09.156 CC lib/iscsi/conn.o
00:06:09.156 CC lib/iscsi/init_grp.o
00:06:09.156 CC lib/ftl/ftl_reloc.o
00:06:09.156 CC lib/ftl/ftl_l2p_cache.o
00:06:09.156 CC lib/iscsi/iscsi.o
00:06:09.156 CC lib/iscsi/param.o
00:06:09.415 CC lib/iscsi/portal_grp.o
00:06:09.415 CC lib/iscsi/tgt_node.o
00:06:09.674 CC lib/iscsi/iscsi_subsystem.o
00:06:09.674 CC lib/vhost/vhost.o
00:06:09.674 CC lib/vhost/vhost_rpc.o
00:06:09.674 CC lib/iscsi/iscsi_rpc.o
00:06:09.674 CC lib/ftl/ftl_p2l.o
00:06:09.674 CC lib/ftl/ftl_p2l_log.o
00:06:09.932 CC lib/ftl/mngt/ftl_mngt.o
00:06:09.932 CC lib/iscsi/task.o
00:06:10.191 CC lib/vhost/vhost_scsi.o
00:06:10.191 CC lib/vhost/vhost_blk.o
00:06:10.191 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:06:10.191 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:06:10.191 CC lib/ftl/mngt/ftl_mngt_startup.o
00:06:10.191 CC lib/ftl/mngt/ftl_mngt_md.o
00:06:10.449 CC lib/vhost/rte_vhost_user.o
00:06:10.449 CC lib/ftl/mngt/ftl_mngt_misc.o
00:06:10.449 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:06:10.449 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:06:10.707 CC lib/ftl/mngt/ftl_mngt_band.o
00:06:10.707 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:06:10.707 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:06:10.707 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:06:10.707 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:06:10.965 CC lib/ftl/utils/ftl_conf.o
00:06:10.965 LIB libspdk_iscsi.a
00:06:10.965 CC lib/ftl/utils/ftl_md.o
00:06:10.965 CC lib/ftl/utils/ftl_mempool.o
00:06:10.965 CC lib/ftl/utils/ftl_bitmap.o
00:06:10.965 SO libspdk_iscsi.so.8.0
00:06:11.224 LIB libspdk_nvmf.a
00:06:11.224 CC lib/ftl/utils/ftl_property.o
00:06:11.224 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:06:11.224 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:06:11.224 SYMLINK libspdk_iscsi.so
00:06:11.224 SO libspdk_nvmf.so.20.0
00:06:11.224 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:06:11.224 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:06:11.224 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:06:11.538 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:06:11.538 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:06:11.538 SYMLINK libspdk_nvmf.so
00:06:11.538 CC lib/ftl/upgrade/ftl_sb_v3.o
00:06:11.538 CC lib/ftl/upgrade/ftl_sb_v5.o
00:06:11.538 CC lib/ftl/nvc/ftl_nvc_dev.o
00:06:11.538 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:06:11.538 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:06:11.538 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:06:11.538 CC lib/ftl/base/ftl_base_dev.o
00:06:11.797 LIB libspdk_vhost.a
00:06:11.797 CC lib/ftl/base/ftl_base_bdev.o
00:06:11.797 CC lib/ftl/ftl_trace.o
00:06:11.797 SO libspdk_vhost.so.8.0
00:06:11.797 SYMLINK libspdk_vhost.so
00:06:12.056 LIB libspdk_ftl.a
00:06:12.315 SO libspdk_ftl.so.9.0
00:06:12.573 SYMLINK libspdk_ftl.so
00:06:12.831 CC module/env_dpdk/env_dpdk_rpc.o
00:06:13.090 CC module/fsdev/aio/fsdev_aio.o
00:06:13.090 CC module/sock/posix/posix.o
00:06:13.090 CC module/accel/error/accel_error.o
00:06:13.090 CC module/accel/ioat/accel_ioat.o
00:06:13.090 CC module/blob/bdev/blob_bdev.o
00:06:13.090 CC module/keyring/file/keyring.o
00:06:13.090 CC module/accel/iaa/accel_iaa.o
00:06:13.090 CC module/scheduler/dynamic/scheduler_dynamic.o
00:06:13.090 CC module/accel/dsa/accel_dsa.o
00:06:13.090 LIB libspdk_env_dpdk_rpc.a
00:06:13.090 SO libspdk_env_dpdk_rpc.so.6.0
00:06:13.090 SYMLINK libspdk_env_dpdk_rpc.so
00:06:13.090 CC module/accel/ioat/accel_ioat_rpc.o
00:06:13.090 CC module/keyring/file/keyring_rpc.o
00:06:13.090 CC module/accel/dsa/accel_dsa_rpc.o
00:06:13.348 CC module/accel/error/accel_error_rpc.o
00:06:13.348 CC module/accel/iaa/accel_iaa_rpc.o
00:06:13.348 LIB libspdk_accel_ioat.a
00:06:13.348 LIB libspdk_scheduler_dynamic.a
00:06:13.348 SO libspdk_scheduler_dynamic.so.4.0
00:06:13.348 SO libspdk_accel_ioat.so.6.0
00:06:13.348 LIB libspdk_blob_bdev.a
00:06:13.348 LIB libspdk_keyring_file.a
00:06:13.348 SO libspdk_keyring_file.so.2.0
00:06:13.348 LIB libspdk_accel_dsa.a
00:06:13.348 SYMLINK libspdk_accel_ioat.so
00:06:13.348 SYMLINK libspdk_scheduler_dynamic.so
00:06:13.348 CC module/fsdev/aio/fsdev_aio_rpc.o
00:06:13.348 SO libspdk_blob_bdev.so.11.0
00:06:13.348 LIB libspdk_accel_iaa.a
00:06:13.348 LIB libspdk_accel_error.a
00:06:13.348 CC module/fsdev/aio/linux_aio_mgr.o
00:06:13.348 SYMLINK libspdk_keyring_file.so
00:06:13.348 SO libspdk_accel_dsa.so.5.0
00:06:13.348 SO libspdk_accel_iaa.so.3.0
00:06:13.348 SYMLINK libspdk_blob_bdev.so
00:06:13.348 SO libspdk_accel_error.so.2.0
00:06:13.607 SYMLINK libspdk_accel_iaa.so
00:06:13.607 SYMLINK libspdk_accel_dsa.so
00:06:13.607 SYMLINK libspdk_accel_error.so
00:06:13.607 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:06:13.607 CC module/keyring/linux/keyring.o
00:06:13.607 CC module/keyring/linux/keyring_rpc.o
00:06:13.607 CC module/scheduler/gscheduler/gscheduler.o
00:06:13.865 CC module/bdev/delay/vbdev_delay.o
00:06:13.865 CC module/bdev/error/vbdev_error.o
00:06:13.865 CC module/bdev/gpt/gpt.o
00:06:13.865 LIB libspdk_scheduler_dpdk_governor.a
00:06:13.865 CC module/bdev/gpt/vbdev_gpt.o
00:06:13.865 LIB libspdk_keyring_linux.a
00:06:13.865 SO libspdk_scheduler_dpdk_governor.so.4.0
00:06:13.865 CC module/blobfs/bdev/blobfs_bdev.o
00:06:13.865 SO libspdk_keyring_linux.so.1.0
00:06:13.865 LIB libspdk_fsdev_aio.a
00:06:13.865 SYMLINK libspdk_scheduler_dpdk_governor.so
00:06:13.865 LIB libspdk_scheduler_gscheduler.a
00:06:13.865 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:06:13.865 SYMLINK libspdk_keyring_linux.so
00:06:13.865 SO libspdk_fsdev_aio.so.1.0
00:06:13.865 SO libspdk_scheduler_gscheduler.so.4.0
00:06:13.865 LIB libspdk_sock_posix.a
00:06:13.865 SYMLINK libspdk_scheduler_gscheduler.so
00:06:13.865 SO libspdk_sock_posix.so.6.0
00:06:14.124 SYMLINK libspdk_fsdev_aio.so
00:06:14.124 CC module/bdev/lvol/vbdev_lvol.o
00:06:14.124 LIB libspdk_blobfs_bdev.a
00:06:14.124 SYMLINK libspdk_sock_posix.so
00:06:14.124 CC module/bdev/error/vbdev_error_rpc.o
00:06:14.124 LIB libspdk_bdev_gpt.a
00:06:14.124 SO libspdk_blobfs_bdev.so.6.0
00:06:14.124 SO libspdk_bdev_gpt.so.6.0
00:06:14.124 CC module/bdev/malloc/bdev_malloc.o
00:06:14.124 CC module/bdev/nvme/bdev_nvme.o
00:06:14.124 CC module/bdev/null/bdev_null.o
00:06:14.124 CC module/bdev/passthru/vbdev_passthru.o
00:06:14.124 SYMLINK libspdk_blobfs_bdev.so
00:06:14.124 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:06:14.124 CC module/bdev/delay/vbdev_delay_rpc.o
00:06:14.124 SYMLINK libspdk_bdev_gpt.so
00:06:14.124 CC module/bdev/nvme/bdev_nvme_rpc.o
00:06:14.382 CC module/bdev/raid/bdev_raid.o
00:06:14.382 LIB libspdk_bdev_error.a
00:06:14.382 SO libspdk_bdev_error.so.6.0
00:06:14.382 CC module/bdev/raid/bdev_raid_rpc.o
00:06:14.382 SYMLINK libspdk_bdev_error.so
00:06:14.382 CC module/bdev/raid/bdev_raid_sb.o
00:06:14.382 LIB libspdk_bdev_delay.a
00:06:14.382 SO libspdk_bdev_delay.so.6.0
00:06:14.382 CC module/bdev/null/bdev_null_rpc.o
00:06:14.640 SYMLINK libspdk_bdev_delay.so
00:06:14.640 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:06:14.640 LIB libspdk_bdev_passthru.a
00:06:14.640 CC module/bdev/malloc/bdev_malloc_rpc.o
00:06:14.640 SO libspdk_bdev_passthru.so.6.0
00:06:14.640 CC module/bdev/nvme/nvme_rpc.o
00:06:14.640 LIB libspdk_bdev_null.a
00:06:14.640 SYMLINK libspdk_bdev_passthru.so
00:06:14.640 CC module/bdev/nvme/bdev_mdns_client.o
00:06:14.899 CC module/bdev/raid/raid0.o
00:06:14.899 SO libspdk_bdev_null.so.6.0
00:06:14.899 LIB libspdk_bdev_malloc.a
00:06:14.899 SO libspdk_bdev_malloc.so.6.0
00:06:14.899 SYMLINK libspdk_bdev_null.so
00:06:14.899 SYMLINK libspdk_bdev_malloc.so
00:06:14.899 CC module/bdev/nvme/vbdev_opal.o
00:06:14.899 CC module/bdev/split/vbdev_split.o
00:06:14.899 CC module/bdev/split/vbdev_split_rpc.o
00:06:14.899 LIB libspdk_bdev_lvol.a
00:06:15.157 CC module/bdev/nvme/vbdev_opal_rpc.o
00:06:15.157 CC module/bdev/zone_block/vbdev_zone_block.o
00:06:15.157 SO libspdk_bdev_lvol.so.6.0
00:06:15.157 CC module/bdev/xnvme/bdev_xnvme.o
00:06:15.157 SYMLINK libspdk_bdev_lvol.so
00:06:15.157 LIB libspdk_bdev_split.a
00:06:15.157 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:06:15.157 SO libspdk_bdev_split.so.6.0
00:06:15.157 CC module/bdev/aio/bdev_aio.o
00:06:15.415 CC module/bdev/aio/bdev_aio_rpc.o
00:06:15.415 CC module/bdev/ftl/bdev_ftl.o
00:06:15.415 SYMLINK libspdk_bdev_split.so
00:06:15.415 CC module/bdev/ftl/bdev_ftl_rpc.o
00:06:15.415 CC module/bdev/iscsi/bdev_iscsi.o
00:06:15.415 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:06:15.415 CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:06:15.415 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:06:15.674 CC module/bdev/raid/raid1.o
00:06:15.674 CC module/bdev/raid/concat.o
00:06:15.674 CC module/bdev/virtio/bdev_virtio_scsi.o
00:06:15.674 LIB libspdk_bdev_zone_block.a
00:06:15.674 CC module/bdev/virtio/bdev_virtio_blk.o
00:06:15.674 LIB libspdk_bdev_ftl.a
00:06:15.674 LIB libspdk_bdev_xnvme.a
00:06:15.674 SO libspdk_bdev_zone_block.so.6.0
00:06:15.674 SO libspdk_bdev_ftl.so.6.0
00:06:15.674 LIB libspdk_bdev_aio.a
00:06:15.674 SO libspdk_bdev_xnvme.so.3.0
00:06:15.674 SO libspdk_bdev_aio.so.6.0
00:06:15.674 SYMLINK libspdk_bdev_zone_block.so
00:06:15.674 SYMLINK libspdk_bdev_ftl.so
00:06:15.674 CC module/bdev/virtio/bdev_virtio_rpc.o
00:06:15.674 SYMLINK libspdk_bdev_xnvme.so
00:06:15.674 LIB libspdk_bdev_iscsi.a
00:06:15.674 SYMLINK libspdk_bdev_aio.so
00:06:15.932 SO libspdk_bdev_iscsi.so.6.0
00:06:15.932 SYMLINK libspdk_bdev_iscsi.so
00:06:15.932 LIB libspdk_bdev_raid.a
00:06:15.932 SO libspdk_bdev_raid.so.6.0
00:06:16.191 SYMLINK libspdk_bdev_raid.so
00:06:16.191 LIB libspdk_bdev_virtio.a
00:06:16.191 SO libspdk_bdev_virtio.so.6.0
00:06:16.450 SYMLINK libspdk_bdev_virtio.so
00:06:17.826 LIB libspdk_bdev_nvme.a
00:06:17.826 SO libspdk_bdev_nvme.so.7.1
00:06:17.826 SYMLINK libspdk_bdev_nvme.so
00:06:18.392 CC module/event/subsystems/sock/sock.o
00:06:18.392 CC module/event/subsystems/scheduler/scheduler.o
00:06:18.392 CC module/event/subsystems/iobuf/iobuf.o
00:06:18.392 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:06:18.392 CC module/event/subsystems/keyring/keyring.o
00:06:18.392 CC module/event/subsystems/vmd/vmd.o
00:06:18.392 CC module/event/subsystems/vmd/vmd_rpc.o
00:06:18.392 CC module/event/subsystems/fsdev/fsdev.o
00:06:18.392 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:06:18.651 LIB libspdk_event_scheduler.a
00:06:18.651 LIB libspdk_event_keyring.a
00:06:18.651 SO libspdk_event_scheduler.so.4.0
00:06:18.651 LIB libspdk_event_sock.a
00:06:18.651 LIB libspdk_event_fsdev.a
00:06:18.651 LIB libspdk_event_vhost_blk.a
00:06:18.651 LIB libspdk_event_iobuf.a
00:06:18.651 LIB libspdk_event_vmd.a
00:06:18.651 SO libspdk_event_keyring.so.1.0
00:06:18.651 SO libspdk_event_sock.so.5.0
00:06:18.651 SO libspdk_event_fsdev.so.1.0
00:06:18.651 SO libspdk_event_vhost_blk.so.3.0
00:06:18.651 SO libspdk_event_vmd.so.6.0
00:06:18.651 SO libspdk_event_iobuf.so.3.0
00:06:18.651 SYMLINK libspdk_event_scheduler.so
00:06:18.651 SYMLINK libspdk_event_keyring.so
00:06:18.651 SYMLINK libspdk_event_fsdev.so
00:06:18.651 SYMLINK libspdk_event_vhost_blk.so
00:06:18.651 SYMLINK libspdk_event_sock.so
00:06:18.651 SYMLINK libspdk_event_vmd.so
00:06:18.651 SYMLINK libspdk_event_iobuf.so
00:06:18.909 CC module/event/subsystems/accel/accel.o
00:06:19.168 LIB libspdk_event_accel.a
00:06:19.168 SO libspdk_event_accel.so.6.0
00:06:19.168 SYMLINK libspdk_event_accel.so
00:06:19.427 CC module/event/subsystems/bdev/bdev.o
00:06:19.686 LIB libspdk_event_bdev.a
00:06:19.945 SO libspdk_event_bdev.so.6.0
00:06:19.945 SYMLINK libspdk_event_bdev.so
00:06:20.204 CC module/event/subsystems/ublk/ublk.o
00:06:20.204 CC module/event/subsystems/nbd/nbd.o
00:06:20.204 CC module/event/subsystems/scsi/scsi.o
00:06:20.204 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:06:20.204 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:06:20.204 LIB libspdk_event_ublk.a
00:06:20.204 LIB libspdk_event_nbd.a
00:06:20.463 LIB libspdk_event_scsi.a
00:06:20.463 SO libspdk_event_nbd.so.6.0
00:06:20.463 SO libspdk_event_ublk.so.3.0
00:06:20.463 SO libspdk_event_scsi.so.6.0
00:06:20.463 SYMLINK libspdk_event_ublk.so
00:06:20.463 SYMLINK libspdk_event_nbd.so
00:06:20.463 SYMLINK libspdk_event_scsi.so
00:06:20.463 LIB libspdk_event_nvmf.a
00:06:20.463 SO libspdk_event_nvmf.so.6.0
00:06:20.463 SYMLINK libspdk_event_nvmf.so
00:06:20.722 CC module/event/subsystems/iscsi/iscsi.o
00:06:20.722 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:06:20.981 LIB libspdk_event_vhost_scsi.a
00:06:20.981 LIB libspdk_event_iscsi.a
00:06:20.981 SO libspdk_event_vhost_scsi.so.3.0
00:06:20.981 SO libspdk_event_iscsi.so.6.0
00:06:20.981 SYMLINK libspdk_event_vhost_scsi.so
00:06:20.981 SYMLINK libspdk_event_iscsi.so
00:06:21.240 SO libspdk.so.6.0
00:06:21.240 SYMLINK libspdk.so
00:06:21.501 CC app/trace_record/trace_record.o
00:06:21.501 CXX app/trace/trace.o
00:06:21.501 TEST_HEADER include/spdk/accel.h
00:06:21.501 TEST_HEADER include/spdk/accel_module.h
00:06:21.501 TEST_HEADER include/spdk/assert.h
00:06:21.501 TEST_HEADER include/spdk/barrier.h
00:06:21.501 TEST_HEADER include/spdk/base64.h
00:06:21.501 TEST_HEADER include/spdk/bdev.h
00:06:21.501 TEST_HEADER include/spdk/bdev_module.h
00:06:21.501 TEST_HEADER include/spdk/bdev_zone.h
00:06:21.501 TEST_HEADER include/spdk/bit_array.h
00:06:21.501 TEST_HEADER include/spdk/bit_pool.h
00:06:21.502 TEST_HEADER include/spdk/blob_bdev.h
00:06:21.502 TEST_HEADER include/spdk/blobfs_bdev.h
00:06:21.502 TEST_HEADER include/spdk/blobfs.h
00:06:21.502 TEST_HEADER include/spdk/blob.h
00:06:21.502 TEST_HEADER include/spdk/conf.h
00:06:21.502 TEST_HEADER include/spdk/config.h
00:06:21.502 TEST_HEADER include/spdk/cpuset.h
00:06:21.502 TEST_HEADER include/spdk/crc16.h
00:06:21.502 TEST_HEADER include/spdk/crc32.h
00:06:21.502 TEST_HEADER include/spdk/crc64.h
00:06:21.502 TEST_HEADER include/spdk/dif.h
00:06:21.502 TEST_HEADER include/spdk/dma.h
00:06:21.502 CC app/nvmf_tgt/nvmf_main.o
00:06:21.502 TEST_HEADER include/spdk/endian.h
00:06:21.502 TEST_HEADER include/spdk/env_dpdk.h
00:06:21.502 TEST_HEADER include/spdk/env.h
00:06:21.502 TEST_HEADER include/spdk/event.h
00:06:21.502 TEST_HEADER include/spdk/fd_group.h
00:06:21.502 TEST_HEADER include/spdk/fd.h
00:06:21.502 TEST_HEADER include/spdk/file.h
00:06:21.502 TEST_HEADER include/spdk/fsdev.h
00:06:21.502 CC app/iscsi_tgt/iscsi_tgt.o
00:06:21.502 TEST_HEADER include/spdk/fsdev_module.h
00:06:21.502 TEST_HEADER include/spdk/ftl.h
00:06:21.502 TEST_HEADER include/spdk/fuse_dispatcher.h
00:06:21.502 TEST_HEADER include/spdk/gpt_spec.h
00:06:21.502 TEST_HEADER include/spdk/hexlify.h
00:06:21.502 TEST_HEADER include/spdk/histogram_data.h
00:06:21.502 TEST_HEADER include/spdk/idxd.h
00:06:21.502 TEST_HEADER include/spdk/idxd_spec.h
00:06:21.502 TEST_HEADER include/spdk/init.h
00:06:21.502 CC app/spdk_tgt/spdk_tgt.o
00:06:21.502 TEST_HEADER include/spdk/ioat.h
00:06:21.502 TEST_HEADER include/spdk/ioat_spec.h
00:06:21.502 TEST_HEADER include/spdk/iscsi_spec.h
00:06:21.502 TEST_HEADER include/spdk/json.h
00:06:21.502 TEST_HEADER include/spdk/jsonrpc.h
00:06:21.502 TEST_HEADER include/spdk/keyring.h
00:06:21.502 CC test/thread/poller_perf/poller_perf.o
00:06:21.502 TEST_HEADER include/spdk/keyring_module.h
00:06:21.502 CC examples/util/zipf/zipf.o
00:06:21.502 TEST_HEADER include/spdk/likely.h
00:06:21.502 TEST_HEADER include/spdk/log.h
00:06:21.502 TEST_HEADER include/spdk/lvol.h
00:06:21.502 TEST_HEADER include/spdk/md5.h
00:06:21.502 TEST_HEADER include/spdk/memory.h
00:06:21.502 TEST_HEADER include/spdk/mmio.h
00:06:21.502 TEST_HEADER include/spdk/nbd.h
00:06:21.502 TEST_HEADER include/spdk/net.h
00:06:21.502 TEST_HEADER include/spdk/notify.h
00:06:21.502 TEST_HEADER include/spdk/nvme.h
00:06:21.502 TEST_HEADER include/spdk/nvme_intel.h
00:06:21.502 CC test/dma/test_dma/test_dma.o
00:06:21.502 TEST_HEADER include/spdk/nvme_ocssd.h
00:06:21.502 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:06:21.502 TEST_HEADER include/spdk/nvme_spec.h
00:06:21.502 TEST_HEADER include/spdk/nvme_zns.h
00:06:21.502 TEST_HEADER include/spdk/nvmf_cmd.h
00:06:21.502 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:06:21.502 TEST_HEADER include/spdk/nvmf.h
00:06:21.502 TEST_HEADER include/spdk/nvmf_spec.h
00:06:21.502 TEST_HEADER include/spdk/nvmf_transport.h
00:06:21.502 TEST_HEADER include/spdk/opal.h
00:06:21.502 TEST_HEADER include/spdk/opal_spec.h
00:06:21.502 CC test/app/bdev_svc/bdev_svc.o
00:06:21.502 TEST_HEADER include/spdk/pci_ids.h
00:06:21.502 TEST_HEADER include/spdk/pipe.h
00:06:21.764 TEST_HEADER include/spdk/queue.h
00:06:21.764 TEST_HEADER include/spdk/reduce.h
00:06:21.764 TEST_HEADER include/spdk/rpc.h
00:06:21.764 TEST_HEADER include/spdk/scheduler.h
00:06:21.764 TEST_HEADER include/spdk/scsi.h
00:06:21.764 TEST_HEADER include/spdk/scsi_spec.h
00:06:21.764 TEST_HEADER include/spdk/sock.h
00:06:21.764 TEST_HEADER include/spdk/stdinc.h
00:06:21.764 TEST_HEADER include/spdk/string.h
00:06:21.764 TEST_HEADER include/spdk/thread.h
00:06:21.764 TEST_HEADER include/spdk/trace.h
00:06:21.764 TEST_HEADER include/spdk/trace_parser.h
00:06:21.764 TEST_HEADER include/spdk/tree.h
00:06:21.764 TEST_HEADER include/spdk/ublk.h
00:06:21.764 TEST_HEADER include/spdk/util.h
00:06:21.764 TEST_HEADER include/spdk/uuid.h
00:06:21.764 TEST_HEADER include/spdk/version.h
00:06:21.764 TEST_HEADER include/spdk/vfio_user_pci.h
00:06:21.764 TEST_HEADER include/spdk/vfio_user_spec.h
00:06:21.764 TEST_HEADER include/spdk/vhost.h
00:06:21.764 TEST_HEADER include/spdk/vmd.h
00:06:21.764 TEST_HEADER include/spdk/xor.h
00:06:21.764 TEST_HEADER include/spdk/zipf.h
00:06:21.764 CXX test/cpp_headers/accel.o
00:06:21.764 LINK nvmf_tgt
00:06:21.764 LINK zipf
00:06:21.764 LINK iscsi_tgt
00:06:21.764 LINK poller_perf
00:06:21.764 LINK spdk_trace_record
00:06:21.764 LINK spdk_tgt
00:06:21.764 CXX test/cpp_headers/accel_module.o
00:06:21.764 LINK bdev_svc
00:06:22.024 CXX test/cpp_headers/assert.o
00:06:22.024 LINK spdk_trace
00:06:22.024 CC examples/ioat/perf/perf.o
00:06:22.024 CXX test/cpp_headers/barrier.o
00:06:22.283 CC app/spdk_lspci/spdk_lspci.o
00:06:22.283 CXX test/cpp_headers/base64.o
00:06:22.283 CC examples/vmd/lsvmd/lsvmd.o
00:06:22.283 CC examples/interrupt_tgt/interrupt_tgt.o
00:06:22.283 CC examples/idxd/perf/perf.o
00:06:22.283 LINK test_dma
00:06:22.283 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:06:22.283 CC test/env/mem_callbacks/mem_callbacks.o
00:06:22.283 LINK lsvmd
00:06:22.283 LINK spdk_lspci
00:06:22.283 CXX test/cpp_headers/bdev.o
00:06:22.542 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:06:22.542 LINK ioat_perf
00:06:22.542 LINK interrupt_tgt
00:06:22.542 CC app/spdk_nvme_perf/perf.o
00:06:22.542 CXX test/cpp_headers/bdev_module.o
00:06:22.542 CC examples/vmd/led/led.o
00:06:22.542 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:06:22.542 LINK idxd_perf
00:06:22.800 CC examples/ioat/verify/verify.o
00:06:22.800 CC test/env/vtophys/vtophys.o
00:06:22.800 LINK led
00:06:22.800 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:06:22.800 CXX test/cpp_headers/bdev_zone.o
00:06:22.800 LINK nvme_fuzz
00:06:22.800 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:06:23.059 CXX test/cpp_headers/bit_array.o
00:06:23.059 LINK vtophys
00:06:23.059 LINK verify
00:06:23.059 CXX test/cpp_headers/bit_pool.o
00:06:23.059 LINK mem_callbacks
00:06:23.059 CC test/env/memory/memory_ut.o
00:06:23.059 CXX test/cpp_headers/blob_bdev.o
00:06:23.059 LINK env_dpdk_post_init
00:06:23.318 CC app/spdk_nvme_identify/identify.o
00:06:23.318 CC app/spdk_nvme_discover/discovery_aer.o
00:06:23.318 CXX test/cpp_headers/blobfs_bdev.o
00:06:23.318 LINK vhost_fuzz
00:06:23.318 CC examples/thread/thread/thread_ex.o
00:06:23.318 CC examples/sock/hello_world/hello_sock.o
00:06:23.318 CC app/spdk_top/spdk_top.o
00:06:23.577 CXX test/cpp_headers/blobfs.o
00:06:23.577 LINK spdk_nvme_discover
00:06:23.577 CC test/app/histogram_perf/histogram_perf.o
00:06:23.577 LINK hello_sock
00:06:23.577 CXX test/cpp_headers/blob.o
00:06:23.577 LINK spdk_nvme_perf
00:06:23.577 LINK thread
00:06:23.835 LINK histogram_perf
00:06:23.835 CC test/env/pci/pci_ut.o
00:06:23.835 CXX test/cpp_headers/conf.o
00:06:23.835 CXX test/cpp_headers/config.o
00:06:24.094 CC test/event/event_perf/event_perf.o
00:06:24.094 CC app/vhost/vhost.o
00:06:24.094 CXX test/cpp_headers/cpuset.o
00:06:24.094 CC app/spdk_dd/spdk_dd.o
00:06:24.094 CC examples/nvme/hello_world/hello_world.o
00:06:24.094 LINK event_perf
00:06:24.352 CXX test/cpp_headers/crc16.o
00:06:24.352 LINK vhost
00:06:24.352 LINK pci_ut
00:06:24.352 LINK spdk_nvme_identify
00:06:24.352 LINK hello_world
00:06:24.352 CC test/event/reactor/reactor.o
00:06:24.352 CXX test/cpp_headers/crc32.o
00:06:24.609 LINK spdk_dd
00:06:24.609 LINK spdk_top
00:06:24.609 CC test/event/reactor_perf/reactor_perf.o
00:06:24.609 LINK memory_ut
00:06:24.609 LINK reactor
00:06:24.609 CC test/event/app_repeat/app_repeat.o
00:06:24.609 CXX test/cpp_headers/crc64.o
00:06:24.609 CC examples/nvme/reconnect/reconnect.o
00:06:24.609 LINK reactor_perf
00:06:24.609 CC test/event/scheduler/scheduler.o
00:06:24.609 LINK iscsi_fuzz
00:06:24.870 CC examples/nvme/nvme_manage/nvme_manage.o
00:06:24.870 LINK app_repeat
00:06:24.870 CXX test/cpp_headers/dif.o
00:06:24.870 CC test/app/jsoncat/jsoncat.o
00:06:24.870 CC examples/nvme/arbitration/arbitration.o
00:06:24.870 CC app/fio/nvme/fio_plugin.o
00:06:24.870 LINK scheduler
00:06:24.870 CC app/fio/bdev/fio_plugin.o
00:06:25.128 LINK jsoncat
00:06:25.128 CC test/app/stub/stub.o
00:06:25.128 CXX test/cpp_headers/dma.o
00:06:25.128 LINK reconnect
00:06:25.128 CC test/nvme/aer/aer.o
00:06:25.128 CXX test/cpp_headers/endian.o
00:06:25.387 LINK stub
00:06:25.387 CC test/nvme/sgl/sgl.o
00:06:25.387 CC test/nvme/reset/reset.o
00:06:25.387 LINK arbitration
00:06:25.387 CC test/nvme/e2edp/nvme_dp.o
00:06:25.387 CXX test/cpp_headers/env_dpdk.o
00:06:25.387 LINK nvme_manage
00:06:25.387 LINK aer
00:06:25.645 CC test/rpc_client/rpc_client_test.o
00:06:25.645 LINK spdk_bdev
00:06:25.645 LINK reset
00:06:25.645 LINK sgl
00:06:25.645 CXX test/cpp_headers/env.o
00:06:25.645 LINK spdk_nvme
00:06:25.645 CC examples/nvme/hotplug/hotplug.o
00:06:25.645 CC test/accel/dif/dif.o
00:06:25.645 LINK nvme_dp
00:06:25.645 LINK rpc_client_test
00:06:25.645 CC examples/nvme/cmb_copy/cmb_copy.o
00:06:25.904 CC examples/nvme/abort/abort.o
00:06:25.904 CXX test/cpp_headers/event.o
00:06:25.904 CC test/nvme/overhead/overhead.o
00:06:25.904 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:06:25.904 LINK hotplug
00:06:25.904 CC test/nvme/err_injection/err_injection.o
00:06:25.904 CC test/blobfs/mkfs/mkfs.o
00:06:25.904 LINK cmb_copy
00:06:26.163 CXX test/cpp_headers/fd_group.o
00:06:26.163 LINK pmr_persistence
00:06:26.163 CXX test/cpp_headers/fd.o
00:06:26.163 CC examples/fsdev/hello_world/hello_fsdev.o
00:06:26.163 CXX test/cpp_headers/file.o
00:06:26.163 LINK err_injection
00:06:26.163 LINK overhead
00:06:26.163 LINK mkfs
00:06:26.422 LINK abort
00:06:26.422 CC test/nvme/startup/startup.o
00:06:26.422 CXX test/cpp_headers/fsdev.o
00:06:26.422 CC examples/accel/perf/accel_perf.o
00:06:26.422 CXX test/cpp_headers/fsdev_module.o
00:06:26.422 CC test/nvme/reserve/reserve.o
00:06:26.422 LINK hello_fsdev
00:06:26.422 CC test/nvme/simple_copy/simple_copy.o
00:06:26.680 LINK startup
00:06:26.680 CC examples/blob/hello_world/hello_blob.o
00:06:26.680 CC examples/blob/cli/blobcli.o
00:06:26.680 LINK dif
00:06:26.680 CXX test/cpp_headers/ftl.o
00:06:26.680 CXX test/cpp_headers/fuse_dispatcher.o
00:06:26.680 LINK reserve
00:06:26.680 LINK simple_copy
00:06:26.939 CC test/nvme/connect_stress/connect_stress.o
00:06:26.939 LINK hello_blob
00:06:26.939 CC test/lvol/esnap/esnap.o
00:06:26.939 CXX test/cpp_headers/gpt_spec.o
00:06:26.939 CXX test/cpp_headers/hexlify.o
00:06:26.939 CC test/nvme/boot_partition/boot_partition.o
00:06:26.939 CC test/nvme/compliance/nvme_compliance.o
00:06:26.939 LINK connect_stress
00:06:26.939 CC test/nvme/fused_ordering/fused_ordering.o
00:06:27.198 CXX
test/cpp_headers/histogram_data.o 00:06:27.198 LINK accel_perf 00:06:27.198 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:27.198 LINK boot_partition 00:06:27.198 CC test/nvme/fdp/fdp.o 00:06:27.198 LINK blobcli 00:06:27.198 CC test/nvme/cuse/cuse.o 00:06:27.198 CXX test/cpp_headers/idxd.o 00:06:27.198 CXX test/cpp_headers/idxd_spec.o 00:06:27.198 LINK fused_ordering 00:06:27.198 CXX test/cpp_headers/init.o 00:06:27.456 LINK doorbell_aers 00:06:27.456 LINK nvme_compliance 00:06:27.456 CXX test/cpp_headers/ioat.o 00:06:27.456 CXX test/cpp_headers/ioat_spec.o 00:06:27.456 CXX test/cpp_headers/iscsi_spec.o 00:06:27.456 CXX test/cpp_headers/json.o 00:06:27.456 CC examples/bdev/hello_world/hello_bdev.o 00:06:27.715 CC examples/bdev/bdevperf/bdevperf.o 00:06:27.715 LINK fdp 00:06:27.715 CXX test/cpp_headers/jsonrpc.o 00:06:27.715 CXX test/cpp_headers/keyring.o 00:06:27.715 CXX test/cpp_headers/keyring_module.o 00:06:27.715 CC test/bdev/bdevio/bdevio.o 00:06:27.715 CXX test/cpp_headers/likely.o 00:06:27.715 CXX test/cpp_headers/log.o 00:06:27.715 CXX test/cpp_headers/lvol.o 00:06:27.715 CXX test/cpp_headers/md5.o 00:06:27.715 CXX test/cpp_headers/memory.o 00:06:27.715 LINK hello_bdev 00:06:27.974 CXX test/cpp_headers/mmio.o 00:06:27.974 CXX test/cpp_headers/nbd.o 00:06:27.974 CXX test/cpp_headers/net.o 00:06:27.974 CXX test/cpp_headers/notify.o 00:06:27.974 CXX test/cpp_headers/nvme.o 00:06:27.974 CXX test/cpp_headers/nvme_intel.o 00:06:27.974 CXX test/cpp_headers/nvme_ocssd.o 00:06:27.974 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:28.233 LINK bdevio 00:06:28.233 CXX test/cpp_headers/nvme_spec.o 00:06:28.233 CXX test/cpp_headers/nvme_zns.o 00:06:28.233 CXX test/cpp_headers/nvmf_cmd.o 00:06:28.233 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:28.233 CXX test/cpp_headers/nvmf.o 00:06:28.491 CXX test/cpp_headers/nvmf_spec.o 00:06:28.491 CXX test/cpp_headers/nvmf_transport.o 00:06:28.491 CXX test/cpp_headers/opal.o 00:06:28.491 CXX test/cpp_headers/opal_spec.o 00:06:28.491 CXX test/cpp_headers/pci_ids.o 00:06:28.491 CXX test/cpp_headers/pipe.o 00:06:28.491 CXX test/cpp_headers/queue.o 00:06:28.491 CXX test/cpp_headers/reduce.o 00:06:28.491 CXX test/cpp_headers/rpc.o 00:06:28.750 LINK bdevperf 00:06:28.750 CXX test/cpp_headers/scheduler.o 00:06:28.750 CXX test/cpp_headers/scsi.o 00:06:28.750 CXX test/cpp_headers/scsi_spec.o 00:06:28.750 CXX test/cpp_headers/sock.o 00:06:28.750 CXX test/cpp_headers/stdinc.o 00:06:28.750 CXX test/cpp_headers/string.o 00:06:28.750 CXX test/cpp_headers/thread.o 00:06:28.750 CXX test/cpp_headers/trace.o 00:06:28.750 CXX test/cpp_headers/trace_parser.o 00:06:28.750 CXX test/cpp_headers/tree.o 00:06:28.750 CXX test/cpp_headers/ublk.o 00:06:28.750 LINK cuse 00:06:28.750 CXX test/cpp_headers/util.o 00:06:29.008 CXX test/cpp_headers/uuid.o 00:06:29.008 CXX test/cpp_headers/version.o 00:06:29.009 CXX test/cpp_headers/vfio_user_pci.o 00:06:29.009 CXX test/cpp_headers/vfio_user_spec.o 00:06:29.009 CXX test/cpp_headers/vhost.o 00:06:29.009 CXX test/cpp_headers/vmd.o 00:06:29.009 CXX test/cpp_headers/xor.o 00:06:29.009 CC examples/nvmf/nvmf/nvmf.o 00:06:29.009 CXX test/cpp_headers/zipf.o 00:06:29.576 LINK nvmf 00:06:33.808 LINK esnap 00:06:34.067 00:06:34.067 real 1m42.801s 00:06:34.067 user 9m20.812s 00:06:34.067 sys 1m51.252s 00:06:34.067 09:03:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:34.067 09:03:28 make -- common/autotest_common.sh@10 -- $ set +x 00:06:34.067 ************************************ 00:06:34.067 END TEST make 00:06:34.067 
************************************ 00:06:34.067 09:03:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:34.067 09:03:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:34.067 09:03:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:34.067 09:03:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:34.067 09:03:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:34.067 09:03:29 -- pm/common@44 -- $ pid=5335 00:06:34.067 09:03:29 -- pm/common@50 -- $ kill -TERM 5335 00:06:34.067 09:03:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:34.067 09:03:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:34.067 09:03:29 -- pm/common@44 -- $ pid=5336 00:06:34.067 09:03:29 -- pm/common@50 -- $ kill -TERM 5336 00:06:34.067 09:03:29 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:34.067 09:03:29 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:34.067 09:03:29 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.067 09:03:29 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.067 09:03:29 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.327 09:03:29 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.327 09:03:29 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.327 09:03:29 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.327 09:03:29 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.327 09:03:29 -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.327 09:03:29 -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.327 09:03:29 -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.327 09:03:29 -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.327 09:03:29 -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.327 09:03:29 -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.327 09:03:29 -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.327 09:03:29 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.327 09:03:29 -- scripts/common.sh@344 -- # case "$op" in 00:06:34.327 09:03:29 -- scripts/common.sh@345 -- # : 1 00:06:34.327 09:03:29 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.327 09:03:29 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.327 09:03:29 -- scripts/common.sh@365 -- # decimal 1 00:06:34.327 09:03:29 -- scripts/common.sh@353 -- # local d=1 00:06:34.327 09:03:29 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.327 09:03:29 -- scripts/common.sh@355 -- # echo 1 00:06:34.327 09:03:29 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.327 09:03:29 -- scripts/common.sh@366 -- # decimal 2 00:06:34.327 09:03:29 -- scripts/common.sh@353 -- # local d=2 00:06:34.327 09:03:29 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.327 09:03:29 -- scripts/common.sh@355 -- # echo 2 00:06:34.327 09:03:29 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.327 09:03:29 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.327 09:03:29 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.327 09:03:29 -- scripts/common.sh@368 -- # return 0 00:06:34.327 09:03:29 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.327 09:03:29 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.327 --rc genhtml_branch_coverage=1 00:06:34.327 --rc genhtml_function_coverage=1 00:06:34.327 --rc genhtml_legend=1 00:06:34.327 --rc geninfo_all_blocks=1 00:06:34.327 --rc geninfo_unexecuted_blocks=1 00:06:34.327 00:06:34.327 ' 00:06:34.327 09:03:29 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.327 --rc genhtml_branch_coverage=1 00:06:34.327 --rc genhtml_function_coverage=1 00:06:34.327 --rc genhtml_legend=1 00:06:34.327 --rc geninfo_all_blocks=1 00:06:34.327 --rc geninfo_unexecuted_blocks=1 00:06:34.327 00:06:34.327 ' 00:06:34.327 09:03:29 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.327 --rc genhtml_branch_coverage=1 00:06:34.327 --rc genhtml_function_coverage=1 00:06:34.327 --rc genhtml_legend=1 00:06:34.327 --rc geninfo_all_blocks=1 00:06:34.327 --rc geninfo_unexecuted_blocks=1 00:06:34.327 00:06:34.327 ' 00:06:34.327 09:03:29 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.327 --rc genhtml_branch_coverage=1 00:06:34.327 --rc genhtml_function_coverage=1 00:06:34.327 --rc genhtml_legend=1 00:06:34.327 --rc geninfo_all_blocks=1 00:06:34.327 --rc geninfo_unexecuted_blocks=1 00:06:34.327 00:06:34.327 ' 00:06:34.327 09:03:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:34.327 09:03:29 -- nvmf/common.sh@7 -- # uname -s 00:06:34.327 09:03:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.327 09:03:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.327 09:03:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.327 09:03:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.327 09:03:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.327 09:03:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.327 09:03:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.327 09:03:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.327 09:03:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.327 09:03:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.327 09:03:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4f672431-7bc3-4680-b192-759d7bcf00f3 00:06:34.327 
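[editor's note] The xtrace above is scripts/common.sh deciding whether the installed lcov (here 1.15) sorts strictly before 2 before it enables the branch/function coverage flags. A minimal standalone sketch of that field-by-field dotted-version comparison follows; the helper name ver_lt is ours for illustration, not SPDK's, and this is a condensed re-reading of the trace rather than the verbatim cmp_versions code.

#!/usr/bin/env bash
# ver_lt returns 0 (true) when $1 sorts strictly before $2, comparing
# numeric fields left to right, mirroring the cmp_versions trace above.
ver_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
    done
    return 1   # equal versions are not "less than"
}

# Gate the coverage flags on the detected lcov version, as the log does:
if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi

With lcov 1.15 the loop returns "less than" on the first field (1 < 2), so the 1.x-style --rc options are exported, which is exactly the branch the trace takes. [end note]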
09:03:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=4f672431-7bc3-4680-b192-759d7bcf00f3 00:06:34.327 09:03:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.328 09:03:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.328 09:03:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:34.328 09:03:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.328 09:03:29 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:34.328 09:03:29 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.328 09:03:29 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.328 09:03:29 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.328 09:03:29 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.328 09:03:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.328 09:03:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.328 09:03:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.328 09:03:29 -- paths/export.sh@5 -- # export PATH 00:06:34.328 09:03:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.328 09:03:29 -- nvmf/common.sh@51 -- # : 0 00:06:34.328 09:03:29 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.328 09:03:29 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.328 09:03:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.328 09:03:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.328 09:03:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.328 09:03:29 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.328 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.328 09:03:29 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.328 09:03:29 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.328 09:03:29 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.328 09:03:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:34.328 09:03:29 -- spdk/autotest.sh@32 -- # uname -s 00:06:34.328 09:03:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:34.328 09:03:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:34.328 09:03:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:34.328 09:03:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:34.328 09:03:29 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:34.328 09:03:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:34.328 09:03:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:34.328 09:03:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:34.328 09:03:29 -- spdk/autotest.sh@48 -- # udevadm_pid=54957 00:06:34.328 09:03:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:34.328 09:03:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:34.328 09:03:29 -- pm/common@17 -- # local monitor 00:06:34.328 09:03:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:34.328 09:03:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:34.328 09:03:29 -- pm/common@25 -- # sleep 1 00:06:34.328 09:03:29 -- pm/common@21 -- # date +%s 00:06:34.328 09:03:29 -- pm/common@21 -- # date +%s 00:06:34.328 09:03:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732093409 00:06:34.328 09:03:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732093409 00:06:34.328 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732093409_collect-vmstat.pm.log 00:06:34.328 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732093409_collect-cpu-load.pm.log 00:06:35.264 09:03:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:35.264 09:03:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:35.264 09:03:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.264 09:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:35.264 09:03:30 -- spdk/autotest.sh@59 -- # create_test_list 00:06:35.264 09:03:30 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:35.264 09:03:30 -- common/autotest_common.sh@10 -- # set +x 00:06:35.523 09:03:30 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:35.523 09:03:30 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:35.523 09:03:30 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:35.523 09:03:30 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:35.523 09:03:30 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:35.523 09:03:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:35.523 09:03:30 -- common/autotest_common.sh@1457 -- # uname 00:06:35.523 09:03:30 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:35.523 09:03:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:35.523 09:03:30 -- common/autotest_common.sh@1477 -- # uname 00:06:35.523 09:03:30 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:35.523 09:03:30 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:35.523 09:03:30 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:35.523 lcov: LCOV version 1.15 00:06:35.523 09:03:30 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:53.614 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:53.614 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:11.767 09:04:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:11.767 09:04:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.767 09:04:06 -- common/autotest_common.sh@10 -- # set +x 00:07:11.767 09:04:06 -- spdk/autotest.sh@78 -- # rm -f 00:07:11.767 09:04:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:12.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.903 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:12.903 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:12.903 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:12.903 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:12.903 09:04:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:12.903 09:04:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:12.903 09:04:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:12.903 09:04:07 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:12.903 09:04:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.903 09:04:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.903 09:04:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.903 09:04:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.903 09:04:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:12.903 09:04:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:12.903 09:04:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.903 09:04:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:12.903 09:04:07 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:12.903 09:04:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:12.903 09:04:07 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.903 09:04:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:12.903 09:04:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:12.903 09:04:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:12.903 09:04:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:12.903 09:04:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:12.903 09:04:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:12.903 09:04:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:12.903 09:04:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:12.903 09:04:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:12.903 09:04:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:12.903 No valid GPT data, bailing 00:07:12.903 09:04:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:12.903 09:04:07 -- scripts/common.sh@394 -- # pt= 00:07:12.903 09:04:07 -- scripts/common.sh@395 -- # return 1 00:07:12.903 09:04:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:12.903 1+0 records in 00:07:12.903 1+0 records out 00:07:12.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131422 s, 79.8 MB/s 00:07:12.903 09:04:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:12.903 09:04:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:12.903 09:04:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:12.903 09:04:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:12.903 09:04:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:12.903 No valid GPT data, bailing 00:07:12.903 09:04:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:13.163 09:04:08 -- scripts/common.sh@394 -- # pt= 00:07:13.163 09:04:08 -- scripts/common.sh@395 -- # return 1 00:07:13.163 09:04:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:13.163 1+0 records in 00:07:13.163 1+0 records out 00:07:13.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525914 s, 199 MB/s 00:07:13.163 09:04:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:13.163 09:04:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:13.163 09:04:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:13.163 09:04:08 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:13.163 09:04:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:13.163 No valid GPT data, bailing 00:07:13.163 09:04:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:13.163 09:04:08 -- scripts/common.sh@394 -- # pt= 00:07:13.163 09:04:08 -- scripts/common.sh@395 -- # return 1 00:07:13.163 09:04:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:13.163 1+0 
records in 00:07:13.163 1+0 records out 00:07:13.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473448 s, 221 MB/s 00:07:13.163 09:04:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:13.163 09:04:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:13.163 09:04:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:07:13.163 09:04:08 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:07:13.163 09:04:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:07:13.163 No valid GPT data, bailing 00:07:13.163 09:04:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:13.163 09:04:08 -- scripts/common.sh@394 -- # pt= 00:07:13.163 09:04:08 -- scripts/common.sh@395 -- # return 1 00:07:13.163 09:04:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:07:13.163 1+0 records in 00:07:13.163 1+0 records out 00:07:13.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477108 s, 220 MB/s 00:07:13.163 09:04:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:13.163 09:04:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:13.163 09:04:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:07:13.163 09:04:08 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:07:13.163 09:04:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:07:13.163 No valid GPT data, bailing 00:07:13.163 09:04:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:13.163 09:04:08 -- scripts/common.sh@394 -- # pt= 00:07:13.163 09:04:08 -- scripts/common.sh@395 -- # return 1 00:07:13.163 09:04:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:07:13.422 1+0 records in 00:07:13.422 1+0 records out 00:07:13.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041634 s, 252 MB/s 00:07:13.422 09:04:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:13.422 09:04:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:13.422 09:04:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:13.422 09:04:08 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:13.422 09:04:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:13.422 No valid GPT data, bailing 00:07:13.422 09:04:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:13.422 09:04:08 -- scripts/common.sh@394 -- # pt= 00:07:13.422 09:04:08 -- scripts/common.sh@395 -- # return 1 00:07:13.422 09:04:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:13.422 1+0 records in 00:07:13.422 1+0 records out 00:07:13.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512708 s, 205 MB/s 00:07:13.422 09:04:08 -- spdk/autotest.sh@105 -- # sync 00:07:13.422 09:04:08 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:13.422 09:04:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:13.422 09:04:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:15.956 09:04:10 -- spdk/autotest.sh@111 -- # uname -s 00:07:15.956 09:04:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:15.956 09:04:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:15.956 09:04:10 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:16.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:16.503 
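[editor's note] The pre-cleanup pass above walks every non-partition /dev/nvme*n* namespace, skips zoned devices via /sys/block/*/queue/zoned, asks spdk-gpt.py and blkid whether a partition table is present, and, finding none, stamps the first MiB with zeros so stale metadata cannot leak into the tests. A condensed sketch of that pattern follows; it uses blkid alone where the harness also consults scripts/spdk-gpt.py, so treat it as an approximation of autotest.sh's loop, not the verbatim code. It must run as root to write the devices.

#!/usr/bin/env bash
shopt -s extglob nullglob
# For every non-partition NVMe namespace: skip zoned devices, skip anything
# holding a partition table, otherwise scrub the first MiB, as traced above.
for dev in /dev/nvme*n!(*p*); do
    name=${dev#/dev/}
    # Only plain (non-zoned) namespaces are eligible for wiping.
    [[ $(cat "/sys/block/$name/queue/zoned" 2>/dev/null) == none ]] || continue
    # An empty PTTYPE means "No valid GPT data, bailing" in the log above.
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done

Each dd then reports the familiar "1+0 records in / 1+0 records out" pair seen in the log. [end note]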
Hugepages 00:07:16.503 node hugesize free / total 00:07:16.503 node0 1048576kB 0 / 0 00:07:16.503 node0 2048kB 0 / 0 00:07:16.503 00:07:16.503 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:16.762 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:16.762 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:16.762 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:17.022 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:07:17.022 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:17.022 09:04:11 -- spdk/autotest.sh@117 -- # uname -s 00:07:17.022 09:04:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:17.022 09:04:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:17.022 09:04:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:17.589 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:18.157 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.157 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.157 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.157 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:18.415 09:04:13 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:19.351 09:04:14 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:19.351 09:04:14 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:19.351 09:04:14 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:19.351 09:04:14 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:19.351 09:04:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:19.351 09:04:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:19.351 09:04:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:19.351 09:04:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:19.351 09:04:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:19.351 09:04:14 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:19.351 09:04:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:19.351 09:04:14 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:19.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:19.868 Waiting for block devices as requested 00:07:19.868 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:20.128 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:20.128 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:20.128 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:25.397 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:25.397 09:04:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:25.397 09:04:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:25.397 09:04:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:25.397 09:04:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:25.397 09:04:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:25.397 09:04:20 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:25.397 09:04:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:25.397 09:04:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:25.397 09:04:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:25.397 09:04:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:25.397 09:04:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:25.397 09:04:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:25.397 09:04:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:25.397 09:04:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:25.397 09:04:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:25.397 09:04:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:25.397 09:04:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:25.397 09:04:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:25.397 09:04:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.397 09:04:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:25.397 09:04:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:25.397 09:04:20 -- common/autotest_common.sh@1543 -- # continue 00:07:25.397 09:04:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:25.397 09:04:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:25.397 09:04:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:25.397 09:04:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:25.398 09:04:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:25.398 09:04:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:25.398 09:04:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:25.398 09:04:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:25.398 09:04:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:25.398 09:04:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:25.398 09:04:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:25.398 09:04:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1543 -- # continue 00:07:25.398 09:04:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:25.398 09:04:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:25.398 09:04:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:25.398 09:04:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:25.398 09:04:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:25.398 09:04:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:25.398 09:04:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:25.398 09:04:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1543 -- # continue 00:07:25.398 09:04:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:25.398 09:04:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:25.398 09:04:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:25.398 09:04:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:25.398 09:04:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:25.398 09:04:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:25.398 09:04:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:25.398 09:04:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:25.398 09:04:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:25.398 09:04:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:25.398 09:04:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:25.398 09:04:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:25.398 09:04:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:07:25.398 09:04:20 -- common/autotest_common.sh@1543 -- # continue 00:07:25.398 09:04:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:25.398 09:04:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.398 09:04:20 -- common/autotest_common.sh@10 -- # set +x 00:07:25.398 09:04:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:25.398 09:04:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.398 09:04:20 -- common/autotest_common.sh@10 -- # set +x 00:07:25.398 09:04:20 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:25.966 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:26.534 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.534 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.793 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.794 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.794 09:04:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:26.794 09:04:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.794 09:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:26.794 09:04:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:26.794 09:04:21 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:26.794 09:04:21 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:26.794 09:04:21 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:26.794 09:04:21 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:26.794 09:04:21 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:26.794 09:04:21 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:26.794 09:04:21 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:26.794 09:04:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:26.794 09:04:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:26.794 09:04:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:26.794 09:04:21 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:26.794 09:04:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:26.794 09:04:21 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:26.794 09:04:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:26.794 09:04:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:26.794 09:04:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:26.794 09:04:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:26.794 09:04:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:26.794 09:04:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:26.794 09:04:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:26.794 09:04:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:26.794 09:04:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:26.794 09:04:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:26.794 09:04:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:26.794 09:04:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:26.794 09:04:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
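[editor's note] In the block-device wait above, each PCI address (the bdfs come from gen_nvme.sh piped through jq -r '.config[].params.traddr') is resolved back to its /dev/nvmeX controller node, and `nvme id-ctrl` output is scraped twice: the OACS word (0x12a here) is masked with 0x8 to confirm namespace-management support, and an `unvmcap` of 0 means no unallocated capacity remains, so the loop continues. A hedged sketch of that probe follows, with the mask spelled out; probe_ctrlr is our illustrative name, and awk stands in for the grep/cut pair in the trace.

#!/usr/bin/env bash
# Given a PCI bdf such as 0000:00:10.0, find its controller node and report
# whether it supports NS management and has unallocated capacity, following
# the id-ctrl parsing traced above.
probe_ctrlr() {
    local bdf=$1 path ctrlr oacs unvmcap
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
    ctrlr=/dev/$(basename "$path")
    oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/oacs/ {print $2}')
    unvmcap=$(nvme id-ctrl "$ctrlr" | awk -F: '/unvmcap/ {print $2}')
    # Bit 3 of OACS is the Namespace Management capability: 0x12a & 0x8 = 0x8.
    (( oacs & 0x8 )) && echo "$ctrlr: NS management supported"
    (( unvmcap == 0 )) && echo "$ctrlr: no unallocated capacity"
}

probe_ctrlr 0000:00:10.0

On this rig the sysfs walk maps 0000:00:10.0 to nvme1, so the probe lands on /dev/nvme1, matching the trace. [end note]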
00:07:26.794 09:04:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:26.794 09:04:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:27.053 09:04:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:27.053 09:04:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:27.053 09:04:21 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:27.053 09:04:21 -- common/autotest_common.sh@1572 -- # return 0 00:07:27.053 09:04:21 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:27.053 09:04:21 -- common/autotest_common.sh@1580 -- # return 0 00:07:27.053 09:04:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:27.053 09:04:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:27.053 09:04:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:27.053 09:04:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:27.053 09:04:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:27.053 09:04:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.053 09:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:27.053 09:04:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:27.053 09:04:21 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:27.053 09:04:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.053 09:04:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.053 09:04:21 -- common/autotest_common.sh@10 -- # set +x 00:07:27.053 ************************************ 00:07:27.053 START TEST env 00:07:27.053 ************************************ 00:07:27.053 09:04:21 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:27.053 * Looking for test storage... 00:07:27.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:27.053 09:04:22 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.053 09:04:22 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.054 09:04:22 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.054 09:04:22 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.054 09:04:22 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.054 09:04:22 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.054 09:04:22 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.054 09:04:22 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.054 09:04:22 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.054 09:04:22 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.054 09:04:22 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.054 09:04:22 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.054 09:04:22 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.054 09:04:22 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.054 09:04:22 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.054 09:04:22 env -- scripts/common.sh@344 -- # case "$op" in 00:07:27.054 09:04:22 env -- scripts/common.sh@345 -- # : 1 00:07:27.054 09:04:22 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.054 09:04:22 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.054 09:04:22 env -- scripts/common.sh@365 -- # decimal 1 00:07:27.054 09:04:22 env -- scripts/common.sh@353 -- # local d=1 00:07:27.054 09:04:22 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.054 09:04:22 env -- scripts/common.sh@355 -- # echo 1 00:07:27.054 09:04:22 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.054 09:04:22 env -- scripts/common.sh@366 -- # decimal 2 00:07:27.054 09:04:22 env -- scripts/common.sh@353 -- # local d=2 00:07:27.054 09:04:22 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.054 09:04:22 env -- scripts/common.sh@355 -- # echo 2 00:07:27.054 09:04:22 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.054 09:04:22 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.054 09:04:22 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.054 09:04:22 env -- scripts/common.sh@368 -- # return 0 00:07:27.054 09:04:22 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.054 09:04:22 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.054 --rc genhtml_branch_coverage=1 00:07:27.054 --rc genhtml_function_coverage=1 00:07:27.054 --rc genhtml_legend=1 00:07:27.054 --rc geninfo_all_blocks=1 00:07:27.054 --rc geninfo_unexecuted_blocks=1 00:07:27.054 00:07:27.054 ' 00:07:27.054 09:04:22 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.054 --rc genhtml_branch_coverage=1 00:07:27.054 --rc genhtml_function_coverage=1 00:07:27.054 --rc genhtml_legend=1 00:07:27.054 --rc geninfo_all_blocks=1 00:07:27.054 --rc geninfo_unexecuted_blocks=1 00:07:27.054 00:07:27.054 ' 00:07:27.054 09:04:22 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.054 --rc genhtml_branch_coverage=1 00:07:27.054 --rc genhtml_function_coverage=1 00:07:27.054 --rc genhtml_legend=1 00:07:27.054 --rc geninfo_all_blocks=1 00:07:27.054 --rc geninfo_unexecuted_blocks=1 00:07:27.054 00:07:27.054 ' 00:07:27.054 09:04:22 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.054 --rc genhtml_branch_coverage=1 00:07:27.054 --rc genhtml_function_coverage=1 00:07:27.054 --rc genhtml_legend=1 00:07:27.054 --rc geninfo_all_blocks=1 00:07:27.054 --rc geninfo_unexecuted_blocks=1 00:07:27.054 00:07:27.054 ' 00:07:27.054 09:04:22 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:27.054 09:04:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.054 09:04:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.054 09:04:22 env -- common/autotest_common.sh@10 -- # set +x 00:07:27.054 ************************************ 00:07:27.054 START TEST env_memory 00:07:27.054 ************************************ 00:07:27.054 09:04:22 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:27.314 00:07:27.314 00:07:27.314 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.314 http://cunit.sourceforge.net/ 00:07:27.314 00:07:27.314 00:07:27.314 Suite: memory 00:07:27.314 Test: alloc and free memory map ...[2024-11-20 09:04:22.222829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:27.314 passed 00:07:27.314 Test: mem map translation ...[2024-11-20 09:04:22.283916] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:27.314 [2024-11-20 09:04:22.284030] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:27.314 [2024-11-20 09:04:22.284205] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:27.314 [2024-11-20 09:04:22.284323] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:27.314 passed 00:07:27.314 Test: mem map registration ...[2024-11-20 09:04:22.383259] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:27.314 [2024-11-20 09:04:22.383383] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:27.314 passed 00:07:27.574 Test: mem map adjacent registrations ...passed 00:07:27.574 00:07:27.574 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.574 suites 1 1 n/a 0 0 00:07:27.574 tests 4 4 4 0 0 00:07:27.574 asserts 152 152 152 0 n/a 00:07:27.574 00:07:27.574 Elapsed time = 0.334 seconds 00:07:27.574 00:07:27.574 real 0m0.378s 00:07:27.574 user 0m0.341s 00:07:27.574 sys 0m0.029s 00:07:27.574 09:04:22 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.574 09:04:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:27.574 ************************************ 00:07:27.574 END TEST env_memory 00:07:27.574 ************************************ 00:07:27.574 09:04:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:27.574 09:04:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.574 09:04:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.574 09:04:22 env -- common/autotest_common.sh@10 -- # set +x 00:07:27.574 ************************************ 00:07:27.574 START TEST env_vtophys 00:07:27.574 ************************************ 00:07:27.574 09:04:22 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:27.574 EAL: lib.eal log level changed from notice to debug 00:07:27.574 EAL: Detected lcore 0 as core 0 on socket 0 00:07:27.574 EAL: Detected lcore 1 as core 0 on socket 0 00:07:27.574 EAL: Detected lcore 2 as core 0 on socket 0 00:07:27.574 EAL: Detected lcore 3 as core 0 on socket 0 00:07:27.574 EAL: Detected lcore 4 as core 0 on socket 0 00:07:27.574 EAL: Detected lcore 5 as core 0 on socket 0 00:07:27.574 EAL: Detected lcore 6 as core 0 on socket 0 00:07:27.574 EAL: Detected lcore 7 as core 0 on socket 0 00:07:27.574 EAL: Detected lcore 8 as core 0 on socket 0 00:07:27.574 EAL: Detected lcore 9 as core 0 on socket 0 00:07:27.574 EAL: Maximum logical cores by configuration: 128 00:07:27.574 EAL: Detected CPU lcores: 10 00:07:27.574 EAL: Detected NUMA nodes: 1 00:07:27.574 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:27.574 EAL: Detected shared linkage of DPDK 00:07:27.574 EAL: No 
shared files mode enabled, IPC will be disabled 00:07:27.574 EAL: Selected IOVA mode 'PA' 00:07:27.574 EAL: Probing VFIO support... 00:07:27.574 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:27.574 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:27.574 EAL: Ask a virtual area of 0x2e000 bytes 00:07:27.574 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:27.574 EAL: Setting up physically contiguous memory... 00:07:27.574 EAL: Setting maximum number of open files to 524288 00:07:27.574 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:27.574 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:27.574 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.574 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:27.574 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.574 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.574 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:27.574 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:27.574 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.574 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:27.574 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.574 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.574 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:27.574 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:27.574 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.574 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:27.574 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.574 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.574 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:27.574 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:27.574 EAL: Ask a virtual area of 0x61000 bytes 00:07:27.574 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:27.574 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:27.574 EAL: Ask a virtual area of 0x400000000 bytes 00:07:27.574 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:27.574 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:27.574 EAL: Hugepages will be freed exactly as allocated. 00:07:27.574 EAL: No shared files mode enabled, IPC is disabled 00:07:27.574 EAL: No shared files mode enabled, IPC is disabled 00:07:27.833 EAL: TSC frequency is ~2200000 KHz 00:07:27.833 EAL: Main lcore 0 is ready (tid=7f8f4bae9a40;cpuset=[0]) 00:07:27.833 EAL: Trying to obtain current memory policy. 00:07:27.833 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:27.833 EAL: Restoring previous memory policy: 0 00:07:27.833 EAL: request: mp_malloc_sync 00:07:27.833 EAL: No shared files mode enabled, IPC is disabled 00:07:27.833 EAL: Heap on socket 0 was expanded by 2MB 00:07:27.833 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:27.833 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:27.833 EAL: Mem event callback 'spdk:(nil)' registered 00:07:27.833 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:07:27.833 00:07:27.833 00:07:27.833 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.833 http://cunit.sourceforge.net/ 00:07:27.833 00:07:27.833 00:07:27.833 Suite: components_suite 00:07:28.402 Test: vtophys_malloc_test ...passed 00:07:28.402 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:28.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.402 EAL: Restoring previous memory policy: 4 00:07:28.402 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.402 EAL: request: mp_malloc_sync 00:07:28.402 EAL: No shared files mode enabled, IPC is disabled 00:07:28.402 EAL: Heap on socket 0 was expanded by 4MB 00:07:28.402 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.402 EAL: request: mp_malloc_sync 00:07:28.402 EAL: No shared files mode enabled, IPC is disabled 00:07:28.402 EAL: Heap on socket 0 was shrunk by 4MB 00:07:28.402 EAL: Trying to obtain current memory policy. 00:07:28.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.402 EAL: Restoring previous memory policy: 4 00:07:28.402 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.402 EAL: request: mp_malloc_sync 00:07:28.402 EAL: No shared files mode enabled, IPC is disabled 00:07:28.402 EAL: Heap on socket 0 was expanded by 6MB 00:07:28.402 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.402 EAL: request: mp_malloc_sync 00:07:28.402 EAL: No shared files mode enabled, IPC is disabled 00:07:28.402 EAL: Heap on socket 0 was shrunk by 6MB 00:07:28.402 EAL: Trying to obtain current memory policy. 00:07:28.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.402 EAL: Restoring previous memory policy: 4 00:07:28.402 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.402 EAL: request: mp_malloc_sync 00:07:28.402 EAL: No shared files mode enabled, IPC is disabled 00:07:28.402 EAL: Heap on socket 0 was expanded by 10MB 00:07:28.402 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.402 EAL: request: mp_malloc_sync 00:07:28.402 EAL: No shared files mode enabled, IPC is disabled 00:07:28.402 EAL: Heap on socket 0 was shrunk by 10MB 00:07:28.402 EAL: Trying to obtain current memory policy. 00:07:28.402 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.402 EAL: Restoring previous memory policy: 4 00:07:28.402 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.402 EAL: request: mp_malloc_sync 00:07:28.402 EAL: No shared files mode enabled, IPC is disabled 00:07:28.402 EAL: Heap on socket 0 was expanded by 18MB 00:07:28.402 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.402 EAL: request: mp_malloc_sync 00:07:28.402 EAL: No shared files mode enabled, IPC is disabled 00:07:28.402 EAL: Heap on socket 0 was shrunk by 18MB 00:07:28.662 EAL: Trying to obtain current memory policy. 00:07:28.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.662 EAL: Restoring previous memory policy: 4 00:07:28.662 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.662 EAL: request: mp_malloc_sync 00:07:28.662 EAL: No shared files mode enabled, IPC is disabled 00:07:28.662 EAL: Heap on socket 0 was expanded by 34MB 00:07:28.662 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.662 EAL: request: mp_malloc_sync 00:07:28.662 EAL: No shared files mode enabled, IPC is disabled 00:07:28.662 EAL: Heap on socket 0 was shrunk by 34MB 00:07:28.662 EAL: Trying to obtain current memory policy. 
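(Aside: the env_memory errors earlier in this run are spdk_mem_map's parameter validation firing. spdk_mem_map_set_translation() and spdk_mem_register() both require the virtual address and the length to be multiples of 2 MB, one hugepage, which is why "vaddr=2097152 len=1234" and "vaddr=4d2 len=2097152" were rejected. A minimal sketch of that API follows; the no-op callback, the translation value, and the addresses are illustrative, not taken from the test source.)

```c
#include "spdk/env.h"

/* No-op notify callback: accepts every 2 MB region add/remove. */
static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
	  enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	return 0;
}

static const struct spdk_mem_map_ops ops = {
	.notify_cb = notify_cb,
	.are_contiguous = NULL,
};

void
mem_map_sketch(void)
{
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

	/* Both vaddr and len must be 2 MB multiples; the
	 * "vaddr=2097152 len=1234" errors above are this check failing. */
	spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0x10000000);

	/* The same alignment rule applies when registering memory with SPDK.
	 * The address here is illustrative; real callers register memory
	 * they actually mapped. */
	spdk_mem_register((void *)0x200000000000, 0x200000);
	spdk_mem_unregister((void *)0x200000000000, 0x200000);

	spdk_mem_map_free(&map);
}
```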
00:07:28.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.662 EAL: Restoring previous memory policy: 4 00:07:28.662 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.662 EAL: request: mp_malloc_sync 00:07:28.662 EAL: No shared files mode enabled, IPC is disabled 00:07:28.662 EAL: Heap on socket 0 was expanded by 66MB 00:07:28.921 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.921 EAL: request: mp_malloc_sync 00:07:28.921 EAL: No shared files mode enabled, IPC is disabled 00:07:28.921 EAL: Heap on socket 0 was shrunk by 66MB 00:07:28.921 EAL: Trying to obtain current memory policy. 00:07:28.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.921 EAL: Restoring previous memory policy: 4 00:07:28.921 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.921 EAL: request: mp_malloc_sync 00:07:28.921 EAL: No shared files mode enabled, IPC is disabled 00:07:28.921 EAL: Heap on socket 0 was expanded by 130MB 00:07:29.179 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.179 EAL: request: mp_malloc_sync 00:07:29.179 EAL: No shared files mode enabled, IPC is disabled 00:07:29.179 EAL: Heap on socket 0 was shrunk by 130MB 00:07:29.438 EAL: Trying to obtain current memory policy. 00:07:29.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.438 EAL: Restoring previous memory policy: 4 00:07:29.438 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.438 EAL: request: mp_malloc_sync 00:07:29.438 EAL: No shared files mode enabled, IPC is disabled 00:07:29.438 EAL: Heap on socket 0 was expanded by 258MB 00:07:30.005 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.005 EAL: request: mp_malloc_sync 00:07:30.005 EAL: No shared files mode enabled, IPC is disabled 00:07:30.005 EAL: Heap on socket 0 was shrunk by 258MB 00:07:30.572 EAL: Trying to obtain current memory policy. 00:07:30.572 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:30.572 EAL: Restoring previous memory policy: 4 00:07:30.572 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.572 EAL: request: mp_malloc_sync 00:07:30.572 EAL: No shared files mode enabled, IPC is disabled 00:07:30.572 EAL: Heap on socket 0 was expanded by 514MB 00:07:31.509 EAL: Calling mem event callback 'spdk:(nil)' 00:07:31.509 EAL: request: mp_malloc_sync 00:07:31.509 EAL: No shared files mode enabled, IPC is disabled 00:07:31.509 EAL: Heap on socket 0 was shrunk by 514MB 00:07:32.446 EAL: Trying to obtain current memory policy. 
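(Aside: the expand/shrink pairs in this stretch of the log are the vtophys suite allocating progressively larger DMA-safe buffers, 4 MB up through 1026 MB, and freeing them again, with each heap change reported through the registered 'spdk:' mem event callback. One iteration of that pattern might look roughly like the sketch below; the 4 MB size and 2 MB alignment are chosen for illustration.)

```c
#include "spdk/env.h"

/* Allocate a DMA-safe buffer (may expand the heap), translate it to a
 * physical address, then free it (the heap may shrink afterwards). */
void
vtophys_sketch(void)
{
	void *buf = spdk_dma_malloc(4 * 1024 * 1024, 0x200000, NULL);
	if (buf == NULL) {
		return;
	}

	uint64_t paddr = spdk_vtophys(buf, NULL);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		/* Buffer is not backed by pinned, translatable memory. */
	}

	spdk_dma_free(buf);
}
```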
00:07:32.446 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.706 EAL: Restoring previous memory policy: 4 00:07:32.706 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.706 EAL: request: mp_malloc_sync 00:07:32.706 EAL: No shared files mode enabled, IPC is disabled 00:07:32.706 EAL: Heap on socket 0 was expanded by 1026MB 00:07:34.610 EAL: Calling mem event callback 'spdk:(nil)' 00:07:34.869 EAL: request: mp_malloc_sync 00:07:34.869 EAL: No shared files mode enabled, IPC is disabled 00:07:34.869 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:36.772 passed 00:07:36.772 00:07:36.772 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.772 suites 1 1 n/a 0 0 00:07:36.772 tests 2 2 2 0 0 00:07:36.772 asserts 5677 5677 5677 0 n/a 00:07:36.772 00:07:36.772 Elapsed time = 8.509 seconds 00:07:36.772 EAL: Calling mem event callback 'spdk:(nil)' 00:07:36.772 EAL: request: mp_malloc_sync 00:07:36.772 EAL: No shared files mode enabled, IPC is disabled 00:07:36.772 EAL: Heap on socket 0 was shrunk by 2MB 00:07:36.772 EAL: No shared files mode enabled, IPC is disabled 00:07:36.772 EAL: No shared files mode enabled, IPC is disabled 00:07:36.772 EAL: No shared files mode enabled, IPC is disabled 00:07:36.772 00:07:36.772 real 0m8.886s 00:07:36.772 user 0m7.472s 00:07:36.772 sys 0m1.232s 00:07:36.772 09:04:31 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.772 ************************************ 00:07:36.772 END TEST env_vtophys 00:07:36.772 ************************************ 00:07:36.772 09:04:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:36.772 09:04:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:36.772 09:04:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.772 09:04:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.772 09:04:31 env -- common/autotest_common.sh@10 -- # set +x 00:07:36.772 ************************************ 00:07:36.772 START TEST env_pci 00:07:36.772 ************************************ 00:07:36.772 09:04:31 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:36.772 00:07:36.772 00:07:36.772 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.772 http://cunit.sourceforge.net/ 00:07:36.772 00:07:36.772 00:07:36.772 Suite: pci 00:07:36.772 Test: pci_hook ...[2024-11-20 09:04:31.553467] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57843 has claimed it 00:07:36.772 passed 00:07:36.772 00:07:36.772 EAL: Cannot find device (10000:00:01.0) 00:07:36.772 EAL: Failed to attach device on primary process 00:07:36.772 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.772 suites 1 1 n/a 0 0 00:07:36.772 tests 1 1 1 0 0 00:07:36.772 asserts 25 25 25 0 n/a 00:07:36.772 00:07:36.772 Elapsed time = 0.008 seconds 00:07:36.772 00:07:36.772 real 0m0.086s 00:07:36.772 user 0m0.048s 00:07:36.772 sys 0m0.037s 00:07:36.772 09:04:31 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.772 ************************************ 00:07:36.772 END TEST env_pci 00:07:36.772 ************************************ 00:07:36.772 09:04:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:36.772 09:04:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:36.772 09:04:31 env -- env/env.sh@15 -- # uname 00:07:36.772 09:04:31 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:36.772 09:04:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:36.772 09:04:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:36.772 09:04:31 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:36.772 09:04:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.772 09:04:31 env -- common/autotest_common.sh@10 -- # set +x 00:07:36.772 ************************************ 00:07:36.772 START TEST env_dpdk_post_init 00:07:36.772 ************************************ 00:07:36.772 09:04:31 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:36.772 EAL: Detected CPU lcores: 10 00:07:36.772 EAL: Detected NUMA nodes: 1 00:07:36.772 EAL: Detected shared linkage of DPDK 00:07:36.772 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:36.772 EAL: Selected IOVA mode 'PA' 00:07:36.772 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:37.032 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:37.032 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:37.032 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:37.032 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:37.032 Starting DPDK initialization... 00:07:37.032 Starting SPDK post initialization... 00:07:37.032 SPDK NVMe probe 00:07:37.032 Attaching to 0000:00:10.0 00:07:37.032 Attaching to 0000:00:11.0 00:07:37.032 Attaching to 0000:00:12.0 00:07:37.032 Attaching to 0000:00:13.0 00:07:37.032 Attached to 0000:00:10.0 00:07:37.032 Attached to 0000:00:11.0 00:07:37.032 Attached to 0000:00:13.0 00:07:37.032 Attached to 0000:00:12.0 00:07:37.032 Cleaning up... 
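(Aside: env_dpdk_post_init above is launched with "-c 0x1 --base-virtaddr=0x200000000000" and then probes the four emulated NVMe controllers. The equivalent environment bring-up in C looks roughly like this sketch; the process name is illustrative, and exact spdk_env_opts fields vary between SPDK releases.)

```c
#include "spdk/env.h"

/* Rough equivalent of "-c 0x1 --base-virtaddr=0x200000000000" above. */
int
env_init_sketch(void)
{
	struct spdk_env_opts opts;

	opts.opts_size = sizeof(opts);	/* expected by recent SPDK releases */
	spdk_env_opts_init(&opts);
	opts.name = "env_dpdk_post_init";	/* illustrative process name */
	opts.core_mask = "0x1";
	opts.base_virtaddr = 0x200000000000;

	if (spdk_env_init(&opts) < 0) {
		return -1;
	}
	return 0;
}
```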
00:07:37.032 00:07:37.032 real 0m0.331s 00:07:37.032 user 0m0.119s 00:07:37.032 sys 0m0.114s 00:07:37.032 09:04:31 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.032 09:04:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:37.032 ************************************ 00:07:37.032 END TEST env_dpdk_post_init 00:07:37.032 ************************************ 00:07:37.032 09:04:32 env -- env/env.sh@26 -- # uname 00:07:37.032 09:04:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:37.032 09:04:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:37.032 09:04:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.032 09:04:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.032 09:04:32 env -- common/autotest_common.sh@10 -- # set +x 00:07:37.032 ************************************ 00:07:37.032 START TEST env_mem_callbacks 00:07:37.032 ************************************ 00:07:37.032 09:04:32 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:37.032 EAL: Detected CPU lcores: 10 00:07:37.032 EAL: Detected NUMA nodes: 1 00:07:37.032 EAL: Detected shared linkage of DPDK 00:07:37.032 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:37.032 EAL: Selected IOVA mode 'PA' 00:07:37.291 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:37.291 00:07:37.291 00:07:37.291 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.291 http://cunit.sourceforge.net/ 00:07:37.291 00:07:37.291 00:07:37.291 Suite: memory 00:07:37.291 Test: test ... 00:07:37.291 register 0x200000200000 2097152 00:07:37.291 malloc 3145728 00:07:37.291 register 0x200000400000 4194304 00:07:37.291 buf 0x2000004fffc0 len 3145728 PASSED 00:07:37.291 malloc 64 00:07:37.291 buf 0x2000004ffec0 len 64 PASSED 00:07:37.291 malloc 4194304 00:07:37.291 register 0x200000800000 6291456 00:07:37.291 buf 0x2000009fffc0 len 4194304 PASSED 00:07:37.291 free 0x2000004fffc0 3145728 00:07:37.291 free 0x2000004ffec0 64 00:07:37.291 unregister 0x200000400000 4194304 PASSED 00:07:37.291 free 0x2000009fffc0 4194304 00:07:37.291 unregister 0x200000800000 6291456 PASSED 00:07:37.291 malloc 8388608 00:07:37.291 register 0x200000400000 10485760 00:07:37.291 buf 0x2000005fffc0 len 8388608 PASSED 00:07:37.291 free 0x2000005fffc0 8388608 00:07:37.291 unregister 0x200000400000 10485760 PASSED 00:07:37.291 passed 00:07:37.291 00:07:37.291 Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.291 suites 1 1 n/a 0 0 00:07:37.291 tests 1 1 1 0 0 00:07:37.291 asserts 15 15 15 0 n/a 00:07:37.291 00:07:37.291 Elapsed time = 0.064 seconds 00:07:37.291 00:07:37.291 real 0m0.278s 00:07:37.291 user 0m0.094s 00:07:37.291 sys 0m0.083s 00:07:37.291 09:04:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.291 09:04:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:37.291 ************************************ 00:07:37.291 END TEST env_mem_callbacks 00:07:37.291 ************************************ 00:07:37.291 ************************************ 00:07:37.291 END TEST env 00:07:37.291 ************************************ 00:07:37.291 00:07:37.291 real 0m10.433s 00:07:37.291 user 0m8.267s 00:07:37.291 sys 0m1.763s 00:07:37.291 09:04:32 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.291 09:04:32 env -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.552 09:04:32 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:37.552 09:04:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.552 09:04:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.552 09:04:32 -- common/autotest_common.sh@10 -- # set +x 00:07:37.552 ************************************ 00:07:37.552 START TEST rpc 00:07:37.552 ************************************ 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:37.552 * Looking for test storage... 00:07:37.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.552 09:04:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.552 09:04:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.552 09:04:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.552 09:04:32 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.552 09:04:32 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.552 09:04:32 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.552 09:04:32 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.552 09:04:32 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.552 09:04:32 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.552 09:04:32 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.552 09:04:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.552 09:04:32 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:37.552 09:04:32 rpc -- scripts/common.sh@345 -- # : 1 00:07:37.552 09:04:32 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.552 09:04:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.552 09:04:32 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:37.552 09:04:32 rpc -- scripts/common.sh@353 -- # local d=1 00:07:37.552 09:04:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.552 09:04:32 rpc -- scripts/common.sh@355 -- # echo 1 00:07:37.552 09:04:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.552 09:04:32 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:37.552 09:04:32 rpc -- scripts/common.sh@353 -- # local d=2 00:07:37.552 09:04:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.552 09:04:32 rpc -- scripts/common.sh@355 -- # echo 2 00:07:37.552 09:04:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.552 09:04:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.552 09:04:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.552 09:04:32 rpc -- scripts/common.sh@368 -- # return 0 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.552 --rc genhtml_branch_coverage=1 00:07:37.552 --rc genhtml_function_coverage=1 00:07:37.552 --rc genhtml_legend=1 00:07:37.552 --rc geninfo_all_blocks=1 00:07:37.552 --rc geninfo_unexecuted_blocks=1 00:07:37.552 00:07:37.552 ' 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.552 --rc genhtml_branch_coverage=1 00:07:37.552 --rc genhtml_function_coverage=1 00:07:37.552 --rc genhtml_legend=1 00:07:37.552 --rc geninfo_all_blocks=1 00:07:37.552 --rc geninfo_unexecuted_blocks=1 00:07:37.552 00:07:37.552 ' 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.552 --rc genhtml_branch_coverage=1 00:07:37.552 --rc genhtml_function_coverage=1 00:07:37.552 --rc genhtml_legend=1 00:07:37.552 --rc geninfo_all_blocks=1 00:07:37.552 --rc geninfo_unexecuted_blocks=1 00:07:37.552 00:07:37.552 ' 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.552 --rc genhtml_branch_coverage=1 00:07:37.552 --rc genhtml_function_coverage=1 00:07:37.552 --rc genhtml_legend=1 00:07:37.552 --rc geninfo_all_blocks=1 00:07:37.552 --rc geninfo_unexecuted_blocks=1 00:07:37.552 00:07:37.552 ' 00:07:37.552 09:04:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57970 00:07:37.552 09:04:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:37.552 09:04:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57970 00:07:37.552 09:04:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@835 -- # '[' -z 57970 ']' 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.552 09:04:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.812 [2024-11-20 09:04:32.751012] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:07:37.812 [2024-11-20 09:04:32.751199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57970 ] 00:07:38.071 [2024-11-20 09:04:32.948384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.071 [2024-11-20 09:04:33.108879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:38.071 [2024-11-20 09:04:33.109003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57970' to capture a snapshot of events at runtime. 00:07:38.071 [2024-11-20 09:04:33.109031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.071 [2024-11-20 09:04:33.109050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.071 [2024-11-20 09:04:33.109065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57970 for offline analysis/debug. 00:07:38.071 [2024-11-20 09:04:33.110612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.008 09:04:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.008 09:04:34 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:39.008 09:04:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:39.008 09:04:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:39.008 09:04:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:39.008 09:04:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:39.008 09:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.008 09:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.008 09:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.008 ************************************ 00:07:39.008 START TEST rpc_integrity 00:07:39.008 ************************************ 00:07:39.008 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:39.008 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:39.008 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.008 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:39.008 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.008 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:39.008 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:39.008 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:39.008 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:39.008 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.008 09:04:34 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:39.267 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.267 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:39.267 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:39.267 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.267 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:39.267 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.267 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:39.267 { 00:07:39.267 "name": "Malloc0", 00:07:39.267 "aliases": [ 00:07:39.267 "55485e97-3c57-438d-9acc-0b8157fab6e0" 00:07:39.267 ], 00:07:39.267 "product_name": "Malloc disk", 00:07:39.267 "block_size": 512, 00:07:39.267 "num_blocks": 16384, 00:07:39.267 "uuid": "55485e97-3c57-438d-9acc-0b8157fab6e0", 00:07:39.267 "assigned_rate_limits": { 00:07:39.267 "rw_ios_per_sec": 0, 00:07:39.267 "rw_mbytes_per_sec": 0, 00:07:39.267 "r_mbytes_per_sec": 0, 00:07:39.267 "w_mbytes_per_sec": 0 00:07:39.267 }, 00:07:39.267 "claimed": false, 00:07:39.267 "zoned": false, 00:07:39.267 "supported_io_types": { 00:07:39.267 "read": true, 00:07:39.267 "write": true, 00:07:39.267 "unmap": true, 00:07:39.267 "flush": true, 00:07:39.267 "reset": true, 00:07:39.267 "nvme_admin": false, 00:07:39.267 "nvme_io": false, 00:07:39.267 "nvme_io_md": false, 00:07:39.267 "write_zeroes": true, 00:07:39.267 "zcopy": true, 00:07:39.267 "get_zone_info": false, 00:07:39.267 "zone_management": false, 00:07:39.267 "zone_append": false, 00:07:39.267 "compare": false, 00:07:39.267 "compare_and_write": false, 00:07:39.267 "abort": true, 00:07:39.267 "seek_hole": false, 00:07:39.267 "seek_data": false, 00:07:39.267 "copy": true, 00:07:39.267 "nvme_iov_md": false 00:07:39.267 }, 00:07:39.267 "memory_domains": [ 00:07:39.267 { 00:07:39.267 "dma_device_id": "system", 00:07:39.267 "dma_device_type": 1 00:07:39.267 }, 00:07:39.267 { 00:07:39.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.267 "dma_device_type": 2 00:07:39.267 } 00:07:39.267 ], 00:07:39.267 "driver_specific": {} 00:07:39.267 } 00:07:39.267 ]' 00:07:39.267 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:39.268 [2024-11-20 09:04:34.221461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:39.268 [2024-11-20 09:04:34.221568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.268 [2024-11-20 09:04:34.221641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.268 [2024-11-20 09:04:34.221694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.268 [2024-11-20 09:04:34.225212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.268 [2024-11-20 09:04:34.225292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:39.268 Passthru0 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.268 
09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:39.268 { 00:07:39.268 "name": "Malloc0", 00:07:39.268 "aliases": [ 00:07:39.268 "55485e97-3c57-438d-9acc-0b8157fab6e0" 00:07:39.268 ], 00:07:39.268 "product_name": "Malloc disk", 00:07:39.268 "block_size": 512, 00:07:39.268 "num_blocks": 16384, 00:07:39.268 "uuid": "55485e97-3c57-438d-9acc-0b8157fab6e0", 00:07:39.268 "assigned_rate_limits": { 00:07:39.268 "rw_ios_per_sec": 0, 00:07:39.268 "rw_mbytes_per_sec": 0, 00:07:39.268 "r_mbytes_per_sec": 0, 00:07:39.268 "w_mbytes_per_sec": 0 00:07:39.268 }, 00:07:39.268 "claimed": true, 00:07:39.268 "claim_type": "exclusive_write", 00:07:39.268 "zoned": false, 00:07:39.268 "supported_io_types": { 00:07:39.268 "read": true, 00:07:39.268 "write": true, 00:07:39.268 "unmap": true, 00:07:39.268 "flush": true, 00:07:39.268 "reset": true, 00:07:39.268 "nvme_admin": false, 00:07:39.268 "nvme_io": false, 00:07:39.268 "nvme_io_md": false, 00:07:39.268 "write_zeroes": true, 00:07:39.268 "zcopy": true, 00:07:39.268 "get_zone_info": false, 00:07:39.268 "zone_management": false, 00:07:39.268 "zone_append": false, 00:07:39.268 "compare": false, 00:07:39.268 "compare_and_write": false, 00:07:39.268 "abort": true, 00:07:39.268 "seek_hole": false, 00:07:39.268 "seek_data": false, 00:07:39.268 "copy": true, 00:07:39.268 "nvme_iov_md": false 00:07:39.268 }, 00:07:39.268 "memory_domains": [ 00:07:39.268 { 00:07:39.268 "dma_device_id": "system", 00:07:39.268 "dma_device_type": 1 00:07:39.268 }, 00:07:39.268 { 00:07:39.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.268 "dma_device_type": 2 00:07:39.268 } 00:07:39.268 ], 00:07:39.268 "driver_specific": {} 00:07:39.268 }, 00:07:39.268 { 00:07:39.268 "name": "Passthru0", 00:07:39.268 "aliases": [ 00:07:39.268 "73eb3182-9922-5792-b67f-56ceb60aaea7" 00:07:39.268 ], 00:07:39.268 "product_name": "passthru", 00:07:39.268 "block_size": 512, 00:07:39.268 "num_blocks": 16384, 00:07:39.268 "uuid": "73eb3182-9922-5792-b67f-56ceb60aaea7", 00:07:39.268 "assigned_rate_limits": { 00:07:39.268 "rw_ios_per_sec": 0, 00:07:39.268 "rw_mbytes_per_sec": 0, 00:07:39.268 "r_mbytes_per_sec": 0, 00:07:39.268 "w_mbytes_per_sec": 0 00:07:39.268 }, 00:07:39.268 "claimed": false, 00:07:39.268 "zoned": false, 00:07:39.268 "supported_io_types": { 00:07:39.268 "read": true, 00:07:39.268 "write": true, 00:07:39.268 "unmap": true, 00:07:39.268 "flush": true, 00:07:39.268 "reset": true, 00:07:39.268 "nvme_admin": false, 00:07:39.268 "nvme_io": false, 00:07:39.268 "nvme_io_md": false, 00:07:39.268 "write_zeroes": true, 00:07:39.268 "zcopy": true, 00:07:39.268 "get_zone_info": false, 00:07:39.268 "zone_management": false, 00:07:39.268 "zone_append": false, 00:07:39.268 "compare": false, 00:07:39.268 "compare_and_write": false, 00:07:39.268 "abort": true, 00:07:39.268 "seek_hole": false, 00:07:39.268 "seek_data": false, 00:07:39.268 "copy": true, 00:07:39.268 "nvme_iov_md": false 00:07:39.268 }, 00:07:39.268 "memory_domains": [ 00:07:39.268 { 00:07:39.268 "dma_device_id": "system", 00:07:39.268 "dma_device_type": 1 00:07:39.268 }, 00:07:39.268 { 00:07:39.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.268 "dma_device_type": 2 
00:07:39.268 } 00:07:39.268 ], 00:07:39.268 "driver_specific": { 00:07:39.268 "passthru": { 00:07:39.268 "name": "Passthru0", 00:07:39.268 "base_bdev_name": "Malloc0" 00:07:39.268 } 00:07:39.268 } 00:07:39.268 } 00:07:39.268 ]' 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:39.268 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:39.268 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:39.538 09:04:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:39.538 00:07:39.538 real 0m0.365s 00:07:39.538 user 0m0.237s 00:07:39.538 sys 0m0.029s 00:07:39.538 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.538 ************************************ 00:07:39.538 END TEST rpc_integrity 00:07:39.538 09:04:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 ************************************ 00:07:39.538 09:04:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:39.538 09:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.538 09:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.538 09:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 ************************************ 00:07:39.538 START TEST rpc_plugins 00:07:39.538 ************************************ 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:39.538 { 00:07:39.538 "name": "Malloc1", 00:07:39.538 "aliases": 
[ 00:07:39.538 "14bdb6af-bf05-46d8-95f5-24fd75ccf5ba" 00:07:39.538 ], 00:07:39.538 "product_name": "Malloc disk", 00:07:39.538 "block_size": 4096, 00:07:39.538 "num_blocks": 256, 00:07:39.538 "uuid": "14bdb6af-bf05-46d8-95f5-24fd75ccf5ba", 00:07:39.538 "assigned_rate_limits": { 00:07:39.538 "rw_ios_per_sec": 0, 00:07:39.538 "rw_mbytes_per_sec": 0, 00:07:39.538 "r_mbytes_per_sec": 0, 00:07:39.538 "w_mbytes_per_sec": 0 00:07:39.538 }, 00:07:39.538 "claimed": false, 00:07:39.538 "zoned": false, 00:07:39.538 "supported_io_types": { 00:07:39.538 "read": true, 00:07:39.538 "write": true, 00:07:39.538 "unmap": true, 00:07:39.538 "flush": true, 00:07:39.538 "reset": true, 00:07:39.538 "nvme_admin": false, 00:07:39.538 "nvme_io": false, 00:07:39.538 "nvme_io_md": false, 00:07:39.538 "write_zeroes": true, 00:07:39.538 "zcopy": true, 00:07:39.538 "get_zone_info": false, 00:07:39.538 "zone_management": false, 00:07:39.538 "zone_append": false, 00:07:39.538 "compare": false, 00:07:39.538 "compare_and_write": false, 00:07:39.538 "abort": true, 00:07:39.538 "seek_hole": false, 00:07:39.538 "seek_data": false, 00:07:39.538 "copy": true, 00:07:39.538 "nvme_iov_md": false 00:07:39.538 }, 00:07:39.538 "memory_domains": [ 00:07:39.538 { 00:07:39.538 "dma_device_id": "system", 00:07:39.538 "dma_device_type": 1 00:07:39.538 }, 00:07:39.538 { 00:07:39.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.538 "dma_device_type": 2 00:07:39.538 } 00:07:39.538 ], 00:07:39.538 "driver_specific": {} 00:07:39.538 } 00:07:39.538 ]' 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:39.538 09:04:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:39.538 00:07:39.538 real 0m0.157s 00:07:39.538 user 0m0.101s 00:07:39.538 sys 0m0.018s 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.538 ************************************ 00:07:39.538 END TEST rpc_plugins 00:07:39.538 09:04:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:39.538 ************************************ 00:07:39.830 09:04:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:39.830 09:04:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.830 09:04:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.830 09:04:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.830 ************************************ 00:07:39.830 START TEST rpc_trace_cmd_test 00:07:39.830 ************************************ 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:39.830 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57970", 00:07:39.830 "tpoint_group_mask": "0x8", 00:07:39.830 "iscsi_conn": { 00:07:39.830 "mask": "0x2", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "scsi": { 00:07:39.830 "mask": "0x4", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "bdev": { 00:07:39.830 "mask": "0x8", 00:07:39.830 "tpoint_mask": "0xffffffffffffffff" 00:07:39.830 }, 00:07:39.830 "nvmf_rdma": { 00:07:39.830 "mask": "0x10", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "nvmf_tcp": { 00:07:39.830 "mask": "0x20", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "ftl": { 00:07:39.830 "mask": "0x40", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "blobfs": { 00:07:39.830 "mask": "0x80", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "dsa": { 00:07:39.830 "mask": "0x200", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "thread": { 00:07:39.830 "mask": "0x400", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "nvme_pcie": { 00:07:39.830 "mask": "0x800", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "iaa": { 00:07:39.830 "mask": "0x1000", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "nvme_tcp": { 00:07:39.830 "mask": "0x2000", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "bdev_nvme": { 00:07:39.830 "mask": "0x4000", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "sock": { 00:07:39.830 "mask": "0x8000", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "blob": { 00:07:39.830 "mask": "0x10000", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "bdev_raid": { 00:07:39.830 "mask": "0x20000", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 }, 00:07:39.830 "scheduler": { 00:07:39.830 "mask": "0x40000", 00:07:39.830 "tpoint_mask": "0x0" 00:07:39.830 } 00:07:39.830 }' 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:39.830 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:39.831 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:39.831 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:40.090 09:04:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:40.090 00:07:40.090 real 0m0.281s 00:07:40.090 user 0m0.243s 00:07:40.090 sys 0m0.027s 00:07:40.090 09:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:40.090 ************************************ 00:07:40.090 END TEST rpc_trace_cmd_test 00:07:40.090 09:04:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.090 ************************************ 00:07:40.090 09:04:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:40.090 09:04:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:40.090 09:04:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:40.090 09:04:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.090 09:04:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.090 09:04:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.090 ************************************ 00:07:40.090 START TEST rpc_daemon_integrity 00:07:40.090 ************************************ 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:40.090 { 00:07:40.090 "name": "Malloc2", 00:07:40.090 "aliases": [ 00:07:40.090 "ac00b60c-65e9-45e9-9062-540c35648cf4" 00:07:40.090 ], 00:07:40.090 "product_name": "Malloc disk", 00:07:40.090 "block_size": 512, 00:07:40.090 "num_blocks": 16384, 00:07:40.090 "uuid": "ac00b60c-65e9-45e9-9062-540c35648cf4", 00:07:40.090 "assigned_rate_limits": { 00:07:40.090 "rw_ios_per_sec": 0, 00:07:40.090 "rw_mbytes_per_sec": 0, 00:07:40.090 "r_mbytes_per_sec": 0, 00:07:40.090 "w_mbytes_per_sec": 0 00:07:40.090 }, 00:07:40.090 "claimed": false, 00:07:40.090 "zoned": false, 00:07:40.090 "supported_io_types": { 00:07:40.090 "read": true, 00:07:40.090 "write": true, 00:07:40.090 "unmap": true, 00:07:40.090 "flush": true, 00:07:40.090 "reset": true, 00:07:40.090 "nvme_admin": false, 00:07:40.090 "nvme_io": false, 00:07:40.090 "nvme_io_md": false, 00:07:40.090 "write_zeroes": true, 00:07:40.090 "zcopy": true, 00:07:40.090 "get_zone_info": false, 00:07:40.090 "zone_management": false, 00:07:40.090 "zone_append": false, 00:07:40.090 "compare": false, 00:07:40.090 
"compare_and_write": false, 00:07:40.090 "abort": true, 00:07:40.090 "seek_hole": false, 00:07:40.090 "seek_data": false, 00:07:40.090 "copy": true, 00:07:40.090 "nvme_iov_md": false 00:07:40.090 }, 00:07:40.090 "memory_domains": [ 00:07:40.090 { 00:07:40.090 "dma_device_id": "system", 00:07:40.090 "dma_device_type": 1 00:07:40.090 }, 00:07:40.090 { 00:07:40.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.090 "dma_device_type": 2 00:07:40.090 } 00:07:40.090 ], 00:07:40.090 "driver_specific": {} 00:07:40.090 } 00:07:40.090 ]' 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.090 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.090 [2024-11-20 09:04:35.181173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:40.090 [2024-11-20 09:04:35.181313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.091 [2024-11-20 09:04:35.181356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:40.091 [2024-11-20 09:04:35.181378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.091 [2024-11-20 09:04:35.184714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.091 [2024-11-20 09:04:35.184769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:40.091 Passthru0 00:07:40.091 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.091 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:40.091 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.091 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:40.350 { 00:07:40.350 "name": "Malloc2", 00:07:40.350 "aliases": [ 00:07:40.350 "ac00b60c-65e9-45e9-9062-540c35648cf4" 00:07:40.350 ], 00:07:40.350 "product_name": "Malloc disk", 00:07:40.350 "block_size": 512, 00:07:40.350 "num_blocks": 16384, 00:07:40.350 "uuid": "ac00b60c-65e9-45e9-9062-540c35648cf4", 00:07:40.350 "assigned_rate_limits": { 00:07:40.350 "rw_ios_per_sec": 0, 00:07:40.350 "rw_mbytes_per_sec": 0, 00:07:40.350 "r_mbytes_per_sec": 0, 00:07:40.350 "w_mbytes_per_sec": 0 00:07:40.350 }, 00:07:40.350 "claimed": true, 00:07:40.350 "claim_type": "exclusive_write", 00:07:40.350 "zoned": false, 00:07:40.350 "supported_io_types": { 00:07:40.350 "read": true, 00:07:40.350 "write": true, 00:07:40.350 "unmap": true, 00:07:40.350 "flush": true, 00:07:40.350 "reset": true, 00:07:40.350 "nvme_admin": false, 00:07:40.350 "nvme_io": false, 00:07:40.350 "nvme_io_md": false, 00:07:40.350 "write_zeroes": true, 00:07:40.350 "zcopy": true, 00:07:40.350 "get_zone_info": false, 00:07:40.350 "zone_management": false, 00:07:40.350 "zone_append": false, 00:07:40.350 "compare": false, 00:07:40.350 "compare_and_write": false, 00:07:40.350 "abort": true, 00:07:40.350 "seek_hole": false, 00:07:40.350 "seek_data": false, 
00:07:40.350 "copy": true, 00:07:40.350 "nvme_iov_md": false 00:07:40.350 }, 00:07:40.350 "memory_domains": [ 00:07:40.350 { 00:07:40.350 "dma_device_id": "system", 00:07:40.350 "dma_device_type": 1 00:07:40.350 }, 00:07:40.350 { 00:07:40.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.350 "dma_device_type": 2 00:07:40.350 } 00:07:40.350 ], 00:07:40.350 "driver_specific": {} 00:07:40.350 }, 00:07:40.350 { 00:07:40.350 "name": "Passthru0", 00:07:40.350 "aliases": [ 00:07:40.350 "2efdaa56-934c-594c-a37c-31b3c1625cc9" 00:07:40.350 ], 00:07:40.350 "product_name": "passthru", 00:07:40.350 "block_size": 512, 00:07:40.350 "num_blocks": 16384, 00:07:40.350 "uuid": "2efdaa56-934c-594c-a37c-31b3c1625cc9", 00:07:40.350 "assigned_rate_limits": { 00:07:40.350 "rw_ios_per_sec": 0, 00:07:40.350 "rw_mbytes_per_sec": 0, 00:07:40.350 "r_mbytes_per_sec": 0, 00:07:40.350 "w_mbytes_per_sec": 0 00:07:40.350 }, 00:07:40.350 "claimed": false, 00:07:40.350 "zoned": false, 00:07:40.350 "supported_io_types": { 00:07:40.350 "read": true, 00:07:40.350 "write": true, 00:07:40.350 "unmap": true, 00:07:40.350 "flush": true, 00:07:40.350 "reset": true, 00:07:40.350 "nvme_admin": false, 00:07:40.350 "nvme_io": false, 00:07:40.350 "nvme_io_md": false, 00:07:40.350 "write_zeroes": true, 00:07:40.350 "zcopy": true, 00:07:40.350 "get_zone_info": false, 00:07:40.350 "zone_management": false, 00:07:40.350 "zone_append": false, 00:07:40.350 "compare": false, 00:07:40.350 "compare_and_write": false, 00:07:40.350 "abort": true, 00:07:40.350 "seek_hole": false, 00:07:40.350 "seek_data": false, 00:07:40.350 "copy": true, 00:07:40.350 "nvme_iov_md": false 00:07:40.350 }, 00:07:40.350 "memory_domains": [ 00:07:40.350 { 00:07:40.350 "dma_device_id": "system", 00:07:40.350 "dma_device_type": 1 00:07:40.350 }, 00:07:40.350 { 00:07:40.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.350 "dma_device_type": 2 00:07:40.350 } 00:07:40.350 ], 00:07:40.350 "driver_specific": { 00:07:40.350 "passthru": { 00:07:40.350 "name": "Passthru0", 00:07:40.350 "base_bdev_name": "Malloc2" 00:07:40.350 } 00:07:40.350 } 00:07:40.350 } 00:07:40.350 ]' 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:40.350 09:04:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:40.350 00:07:40.351 real 0m0.357s 00:07:40.351 user 0m0.216s 00:07:40.351 sys 0m0.044s 00:07:40.351 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.351 ************************************ 00:07:40.351 END TEST rpc_daemon_integrity 00:07:40.351 ************************************ 00:07:40.351 09:04:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:40.351 09:04:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:40.351 09:04:35 rpc -- rpc/rpc.sh@84 -- # killprocess 57970 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 57970 ']' 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@958 -- # kill -0 57970 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@959 -- # uname 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57970 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.351 killing process with pid 57970 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57970' 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@973 -- # kill 57970 00:07:40.351 09:04:35 rpc -- common/autotest_common.sh@978 -- # wait 57970 00:07:42.888 00:07:42.888 real 0m5.448s 00:07:42.888 user 0m6.079s 00:07:42.888 sys 0m1.015s 00:07:42.888 09:04:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.888 09:04:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.888 ************************************ 00:07:42.888 END TEST rpc 00:07:42.888 ************************************ 00:07:42.888 09:04:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:42.888 09:04:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.888 09:04:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.888 09:04:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.888 ************************************ 00:07:42.888 START TEST skip_rpc 00:07:42.888 ************************************ 00:07:42.888 09:04:37 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:42.888 * Looking for test storage... 
00:07:43.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:43.147 09:04:38 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:43.147 09:04:38 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.148 09:04:38 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:43.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.148 --rc genhtml_branch_coverage=1 00:07:43.148 --rc genhtml_function_coverage=1 00:07:43.148 --rc genhtml_legend=1 00:07:43.148 --rc geninfo_all_blocks=1 00:07:43.148 --rc geninfo_unexecuted_blocks=1 00:07:43.148 00:07:43.148 ' 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:43.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.148 --rc genhtml_branch_coverage=1 00:07:43.148 --rc genhtml_function_coverage=1 00:07:43.148 --rc genhtml_legend=1 00:07:43.148 --rc geninfo_all_blocks=1 00:07:43.148 --rc geninfo_unexecuted_blocks=1 00:07:43.148 00:07:43.148 ' 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:43.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.148 --rc genhtml_branch_coverage=1 00:07:43.148 --rc genhtml_function_coverage=1 00:07:43.148 --rc genhtml_legend=1 00:07:43.148 --rc geninfo_all_blocks=1 00:07:43.148 --rc geninfo_unexecuted_blocks=1 00:07:43.148 00:07:43.148 ' 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:43.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.148 --rc genhtml_branch_coverage=1 00:07:43.148 --rc genhtml_function_coverage=1 00:07:43.148 --rc genhtml_legend=1 00:07:43.148 --rc geninfo_all_blocks=1 00:07:43.148 --rc geninfo_unexecuted_blocks=1 00:07:43.148 00:07:43.148 ' 00:07:43.148 09:04:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:43.148 09:04:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:43.148 09:04:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.148 09:04:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.148 ************************************ 00:07:43.148 START TEST skip_rpc 00:07:43.148 ************************************ 00:07:43.148 09:04:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:43.148 09:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58205 00:07:43.148 09:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:43.148 09:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:43.148 09:04:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:43.148 [2024-11-20 09:04:38.238248] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
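What this skip_rpc case exercises, in outline: the target is started with --no-rpc-server, so any RPC attempted against the default socket must fail, and that failure is exactly what the NOT wrapper traced below asserts. A minimal standalone sketch of the same idea (the trace uses rpc_cmd and the es bookkeeping instead of a bare if):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                      # skip_rpc.sh@19: let init finish
if scripts/rpc.py spdk_get_version; then     # no RPC server: this must fail
  echo "RPC unexpectedly answered" >&2
  exit 1
fi
kill "$spdk_pid" && wait "$spdk_pid"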
00:07:43.148 [2024-11-20 09:04:38.238464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58205 ] 00:07:43.407 [2024-11-20 09:04:38.414558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.665 [2024-11-20 09:04:38.540850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58205 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58205 ']' 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58205 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58205 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.072 killing process with pid 58205 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58205' 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58205 00:07:49.072 09:04:43 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58205 00:07:50.449 00:07:50.449 real 0m7.397s 00:07:50.449 user 0m6.797s 00:07:50.449 sys 0m0.484s 00:07:50.449 09:04:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.449 ************************************ 00:07:50.449 END TEST skip_rpc 00:07:50.449 ************************************ 00:07:50.449 09:04:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:07:50.449 09:04:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:50.449 09:04:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.449 09:04:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.449 09:04:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.708 ************************************ 00:07:50.708 START TEST skip_rpc_with_json 00:07:50.708 ************************************ 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58309 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58309 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58309 ']' 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.708 09:04:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:50.708 [2024-11-20 09:04:45.680123] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
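waitforlisten, traced above for pid 58309, only has to block until the target's RPC socket accepts requests, or bail out if the process dies first. A plausible minimal version, not the exact autotest_common.sh helper; max_retries=100 and the waiting message are from the trace, the polling body is an assumption:

waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$pid" || return 1               # target died while we waited
    [ -S "$sock" ] &&
      scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
    sleep 0.1                                # assumed poll interval
  done
  return 1
}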
00:07:50.708 [2024-11-20 09:04:45.680308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58309 ] 00:07:50.966 [2024-11-20 09:04:45.851725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.966 [2024-11-20 09:04:45.976286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:51.901 [2024-11-20 09:04:46.883401] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:51.901 request: 00:07:51.901 { 00:07:51.901 "trtype": "tcp", 00:07:51.901 "method": "nvmf_get_transports", 00:07:51.901 "req_id": 1 00:07:51.901 } 00:07:51.901 Got JSON-RPC error response 00:07:51.901 response: 00:07:51.901 { 00:07:51.901 "code": -19, 00:07:51.901 "message": "No such device" 00:07:51.901 } 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:51.901 [2024-11-20 09:04:46.895511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.901 09:04:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:52.160 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.160 09:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:52.160 { 00:07:52.160 "subsystems": [ 00:07:52.160 { 00:07:52.160 "subsystem": "fsdev", 00:07:52.160 "config": [ 00:07:52.160 { 00:07:52.160 "method": "fsdev_set_opts", 00:07:52.160 "params": { 00:07:52.160 "fsdev_io_pool_size": 65535, 00:07:52.160 "fsdev_io_cache_size": 256 00:07:52.160 } 00:07:52.160 } 00:07:52.160 ] 00:07:52.160 }, 00:07:52.160 { 00:07:52.160 "subsystem": "keyring", 00:07:52.160 "config": [] 00:07:52.160 }, 00:07:52.160 { 00:07:52.160 "subsystem": "iobuf", 00:07:52.160 "config": [ 00:07:52.160 { 00:07:52.160 "method": "iobuf_set_options", 00:07:52.160 "params": { 00:07:52.160 "small_pool_count": 8192, 00:07:52.160 "large_pool_count": 1024, 00:07:52.160 "small_bufsize": 8192, 00:07:52.160 "large_bufsize": 135168, 00:07:52.160 "enable_numa": false 00:07:52.160 } 00:07:52.160 } 00:07:52.160 ] 00:07:52.160 }, 00:07:52.160 { 00:07:52.160 "subsystem": "sock", 00:07:52.160 "config": [ 00:07:52.160 { 
00:07:52.160 "method": "sock_set_default_impl", 00:07:52.160 "params": { 00:07:52.160 "impl_name": "posix" 00:07:52.160 } 00:07:52.160 }, 00:07:52.160 { 00:07:52.160 "method": "sock_impl_set_options", 00:07:52.160 "params": { 00:07:52.160 "impl_name": "ssl", 00:07:52.160 "recv_buf_size": 4096, 00:07:52.160 "send_buf_size": 4096, 00:07:52.160 "enable_recv_pipe": true, 00:07:52.161 "enable_quickack": false, 00:07:52.161 "enable_placement_id": 0, 00:07:52.161 "enable_zerocopy_send_server": true, 00:07:52.161 "enable_zerocopy_send_client": false, 00:07:52.161 "zerocopy_threshold": 0, 00:07:52.161 "tls_version": 0, 00:07:52.161 "enable_ktls": false 00:07:52.161 } 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "method": "sock_impl_set_options", 00:07:52.161 "params": { 00:07:52.161 "impl_name": "posix", 00:07:52.161 "recv_buf_size": 2097152, 00:07:52.161 "send_buf_size": 2097152, 00:07:52.161 "enable_recv_pipe": true, 00:07:52.161 "enable_quickack": false, 00:07:52.161 "enable_placement_id": 0, 00:07:52.161 "enable_zerocopy_send_server": true, 00:07:52.161 "enable_zerocopy_send_client": false, 00:07:52.161 "zerocopy_threshold": 0, 00:07:52.161 "tls_version": 0, 00:07:52.161 "enable_ktls": false 00:07:52.161 } 00:07:52.161 } 00:07:52.161 ] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "vmd", 00:07:52.161 "config": [] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "accel", 00:07:52.161 "config": [ 00:07:52.161 { 00:07:52.161 "method": "accel_set_options", 00:07:52.161 "params": { 00:07:52.161 "small_cache_size": 128, 00:07:52.161 "large_cache_size": 16, 00:07:52.161 "task_count": 2048, 00:07:52.161 "sequence_count": 2048, 00:07:52.161 "buf_count": 2048 00:07:52.161 } 00:07:52.161 } 00:07:52.161 ] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "bdev", 00:07:52.161 "config": [ 00:07:52.161 { 00:07:52.161 "method": "bdev_set_options", 00:07:52.161 "params": { 00:07:52.161 "bdev_io_pool_size": 65535, 00:07:52.161 "bdev_io_cache_size": 256, 00:07:52.161 "bdev_auto_examine": true, 00:07:52.161 "iobuf_small_cache_size": 128, 00:07:52.161 "iobuf_large_cache_size": 16 00:07:52.161 } 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "method": "bdev_raid_set_options", 00:07:52.161 "params": { 00:07:52.161 "process_window_size_kb": 1024, 00:07:52.161 "process_max_bandwidth_mb_sec": 0 00:07:52.161 } 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "method": "bdev_iscsi_set_options", 00:07:52.161 "params": { 00:07:52.161 "timeout_sec": 30 00:07:52.161 } 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "method": "bdev_nvme_set_options", 00:07:52.161 "params": { 00:07:52.161 "action_on_timeout": "none", 00:07:52.161 "timeout_us": 0, 00:07:52.161 "timeout_admin_us": 0, 00:07:52.161 "keep_alive_timeout_ms": 10000, 00:07:52.161 "arbitration_burst": 0, 00:07:52.161 "low_priority_weight": 0, 00:07:52.161 "medium_priority_weight": 0, 00:07:52.161 "high_priority_weight": 0, 00:07:52.161 "nvme_adminq_poll_period_us": 10000, 00:07:52.161 "nvme_ioq_poll_period_us": 0, 00:07:52.161 "io_queue_requests": 0, 00:07:52.161 "delay_cmd_submit": true, 00:07:52.161 "transport_retry_count": 4, 00:07:52.161 "bdev_retry_count": 3, 00:07:52.161 "transport_ack_timeout": 0, 00:07:52.161 "ctrlr_loss_timeout_sec": 0, 00:07:52.161 "reconnect_delay_sec": 0, 00:07:52.161 "fast_io_fail_timeout_sec": 0, 00:07:52.161 "disable_auto_failback": false, 00:07:52.161 "generate_uuids": false, 00:07:52.161 "transport_tos": 0, 00:07:52.161 "nvme_error_stat": false, 00:07:52.161 "rdma_srq_size": 0, 00:07:52.161 "io_path_stat": false, 
00:07:52.161 "allow_accel_sequence": false, 00:07:52.161 "rdma_max_cq_size": 0, 00:07:52.161 "rdma_cm_event_timeout_ms": 0, 00:07:52.161 "dhchap_digests": [ 00:07:52.161 "sha256", 00:07:52.161 "sha384", 00:07:52.161 "sha512" 00:07:52.161 ], 00:07:52.161 "dhchap_dhgroups": [ 00:07:52.161 "null", 00:07:52.161 "ffdhe2048", 00:07:52.161 "ffdhe3072", 00:07:52.161 "ffdhe4096", 00:07:52.161 "ffdhe6144", 00:07:52.161 "ffdhe8192" 00:07:52.161 ] 00:07:52.161 } 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "method": "bdev_nvme_set_hotplug", 00:07:52.161 "params": { 00:07:52.161 "period_us": 100000, 00:07:52.161 "enable": false 00:07:52.161 } 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "method": "bdev_wait_for_examine" 00:07:52.161 } 00:07:52.161 ] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "scsi", 00:07:52.161 "config": null 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "scheduler", 00:07:52.161 "config": [ 00:07:52.161 { 00:07:52.161 "method": "framework_set_scheduler", 00:07:52.161 "params": { 00:07:52.161 "name": "static" 00:07:52.161 } 00:07:52.161 } 00:07:52.161 ] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "vhost_scsi", 00:07:52.161 "config": [] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "vhost_blk", 00:07:52.161 "config": [] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "ublk", 00:07:52.161 "config": [] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "nbd", 00:07:52.161 "config": [] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "nvmf", 00:07:52.161 "config": [ 00:07:52.161 { 00:07:52.161 "method": "nvmf_set_config", 00:07:52.161 "params": { 00:07:52.161 "discovery_filter": "match_any", 00:07:52.161 "admin_cmd_passthru": { 00:07:52.161 "identify_ctrlr": false 00:07:52.161 }, 00:07:52.161 "dhchap_digests": [ 00:07:52.161 "sha256", 00:07:52.161 "sha384", 00:07:52.161 "sha512" 00:07:52.161 ], 00:07:52.161 "dhchap_dhgroups": [ 00:07:52.161 "null", 00:07:52.161 "ffdhe2048", 00:07:52.161 "ffdhe3072", 00:07:52.161 "ffdhe4096", 00:07:52.161 "ffdhe6144", 00:07:52.161 "ffdhe8192" 00:07:52.161 ] 00:07:52.161 } 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "method": "nvmf_set_max_subsystems", 00:07:52.161 "params": { 00:07:52.161 "max_subsystems": 1024 00:07:52.161 } 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "method": "nvmf_set_crdt", 00:07:52.161 "params": { 00:07:52.161 "crdt1": 0, 00:07:52.161 "crdt2": 0, 00:07:52.161 "crdt3": 0 00:07:52.161 } 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "method": "nvmf_create_transport", 00:07:52.161 "params": { 00:07:52.161 "trtype": "TCP", 00:07:52.161 "max_queue_depth": 128, 00:07:52.161 "max_io_qpairs_per_ctrlr": 127, 00:07:52.161 "in_capsule_data_size": 4096, 00:07:52.161 "max_io_size": 131072, 00:07:52.161 "io_unit_size": 131072, 00:07:52.161 "max_aq_depth": 128, 00:07:52.161 "num_shared_buffers": 511, 00:07:52.161 "buf_cache_size": 4294967295, 00:07:52.161 "dif_insert_or_strip": false, 00:07:52.161 "zcopy": false, 00:07:52.161 "c2h_success": true, 00:07:52.161 "sock_priority": 0, 00:07:52.161 "abort_timeout_sec": 1, 00:07:52.161 "ack_timeout": 0, 00:07:52.161 "data_wr_pool_size": 0 00:07:52.161 } 00:07:52.161 } 00:07:52.161 ] 00:07:52.161 }, 00:07:52.161 { 00:07:52.161 "subsystem": "iscsi", 00:07:52.161 "config": [ 00:07:52.161 { 00:07:52.161 "method": "iscsi_set_options", 00:07:52.161 "params": { 00:07:52.161 "node_base": "iqn.2016-06.io.spdk", 00:07:52.161 "max_sessions": 128, 00:07:52.161 "max_connections_per_session": 2, 00:07:52.161 "max_queue_depth": 64, 00:07:52.161 
"default_time2wait": 2, 00:07:52.161 "default_time2retain": 20, 00:07:52.161 "first_burst_length": 8192, 00:07:52.161 "immediate_data": true, 00:07:52.161 "allow_duplicated_isid": false, 00:07:52.161 "error_recovery_level": 0, 00:07:52.161 "nop_timeout": 60, 00:07:52.161 "nop_in_interval": 30, 00:07:52.161 "disable_chap": false, 00:07:52.161 "require_chap": false, 00:07:52.161 "mutual_chap": false, 00:07:52.161 "chap_group": 0, 00:07:52.161 "max_large_datain_per_connection": 64, 00:07:52.161 "max_r2t_per_connection": 4, 00:07:52.161 "pdu_pool_size": 36864, 00:07:52.161 "immediate_data_pool_size": 16384, 00:07:52.161 "data_out_pool_size": 2048 00:07:52.161 } 00:07:52.161 } 00:07:52.161 ] 00:07:52.161 } 00:07:52.161 ] 00:07:52.161 } 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58309 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58309 ']' 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58309 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58309 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.161 killing process with pid 58309 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58309' 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58309 00:07:52.161 09:04:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58309 00:07:54.696 09:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58365 00:07:54.696 09:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:54.696 09:04:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58365 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58365 ']' 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58365 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58365 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.984 killing process with pid 58365 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58365' 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58365 00:07:59.984 09:04:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58365 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:01.886 00:08:01.886 real 0m11.230s 00:08:01.886 user 0m10.406s 00:08:01.886 sys 0m1.264s 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.886 ************************************ 00:08:01.886 END TEST skip_rpc_with_json 00:08:01.886 ************************************ 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:01.886 09:04:56 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:01.886 09:04:56 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.886 09:04:56 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.886 09:04:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.886 ************************************ 00:08:01.886 START TEST skip_rpc_with_delay 00:08:01.886 ************************************ 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:01.886 09:04:56 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:02.146 [2024-11-20 09:04:57.010415] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
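skip_rpc_with_delay, which just failed as intended above, is purely an argument-validation check: --wait-for-rpc makes no sense when the RPC server is disabled, and app.c rejects the combination before startup. Stripped of the NOT/es plumbing traced here, the check amounts to:

if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
  echo "spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
  exit 1
fi
# expected on stderr: Cannot use '--wait-for-rpc' if no RPC server is going to be started.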
00:08:02.146 09:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:02.146 09:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.146 09:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:02.146 09:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.146 00:08:02.146 real 0m0.221s 00:08:02.146 user 0m0.118s 00:08:02.146 sys 0m0.101s 00:08:02.146 09:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.146 09:04:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:02.146 ************************************ 00:08:02.146 END TEST skip_rpc_with_delay 00:08:02.146 ************************************ 00:08:02.146 09:04:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:02.146 09:04:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:02.146 09:04:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:02.146 09:04:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.146 09:04:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.146 09:04:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.146 ************************************ 00:08:02.146 START TEST exit_on_failed_rpc_init 00:08:02.146 ************************************ 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58494 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58494 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58494 ']' 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.146 09:04:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:02.405 [2024-11-20 09:04:57.281095] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
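Stepping back to skip_rpc_with_json, which finished a few entries above: its point is the save_config round-trip. The config dumped over RPC is replayed into a second target with the RPC server disabled, and the grep on 'TCP Transport Init' proves the transport was reconstructed purely from the JSON. Roughly, with pids and paths as in this run; the log redirection is assumed, since the trace only shows LOG_PATH:

scripts/rpc.py nvmf_create_transport -t tcp                # makes the config non-trivial
scripts/rpc.py save_config > test/rpc/config.json          # skip_rpc.sh@36-37, pid 58309
killprocess 58309
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
  --json test/rpc/config.json > test/rpc/log.txt 2>&1 &    # pid 58365: replay, no RPC
sleep 5 && killprocess 58365
grep -q 'TCP Transport Init' test/rpc/log.txt              # skip_rpc.sh@51
rm test/rpc/log.txt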
00:08:02.405 [2024-11-20 09:04:57.281365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58494 ] 00:08:02.405 [2024-11-20 09:04:57.470957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.664 [2024-11-20 09:04:57.619533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:03.624 09:04:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:03.884 [2024-11-20 09:04:58.747567] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:08:03.884 [2024-11-20 09:04:58.747784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58523 ] 00:08:03.884 [2024-11-20 09:04:58.947299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.143 [2024-11-20 09:04:59.119177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.143 [2024-11-20 09:04:59.119369] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
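The two startup blocks above are the heart of exit_on_failed_rpc_init: the first target (pid 58494) owns /var/tmp/spdk.sock, so the second instance (pid 58523, core mask 0x2) must fail rpc_initialize and exit non-zero; the trace below then normalizes its status 234 down to es=1. In outline:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
first=$!
waitforlisten "$first"                       # first target owns the socket
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
  echo "second target started despite the socket conflict" >&2
  exit 1
fi                                           # exits 234 here; the test maps it to es=1
killprocess "$first"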
00:08:04.143 [2024-11-20 09:04:59.119396] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:04.143 [2024-11-20 09:04:59.119445] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58494 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58494 ']' 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58494 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58494 00:08:04.402 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.402 killing process with pid 58494 00:08:04.403 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.403 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58494' 00:08:04.403 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58494 00:08:04.403 09:04:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58494 00:08:06.938 00:08:06.938 real 0m4.735s 00:08:06.938 user 0m5.124s 00:08:06.938 sys 0m0.875s 00:08:06.938 09:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.938 09:05:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:06.938 ************************************ 00:08:06.938 END TEST exit_on_failed_rpc_init 00:08:06.938 ************************************ 00:08:06.938 09:05:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:06.938 00:08:06.938 real 0m24.011s 00:08:06.938 user 0m22.628s 00:08:06.938 sys 0m2.955s 00:08:06.938 09:05:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.938 09:05:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.938 ************************************ 00:08:06.938 END TEST skip_rpc 00:08:06.938 ************************************ 00:08:06.938 09:05:01 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:06.938 09:05:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.938 09:05:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.938 09:05:01 -- common/autotest_common.sh@10 -- # set +x 00:08:06.938 
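Every suite in this log, rpc_client included, runs under the same run_test wrapper, which is where the START/END banners printed just below, the argument-count guard ('[' 2 -le 1 ']') and the real/user/sys timings come from. A loose reconstruction from the banners and the guard alone, not the actual autotest_common.sh code:

run_test() {
  [ $# -le 1 ] && return 1          # traced as '[' 2 -le 1 ']': need a name and a command
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                         # produces the real/user/sys lines in this log
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}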
************************************ 00:08:06.938 START TEST rpc_client 00:08:06.938 ************************************ 00:08:06.938 09:05:01 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:07.198 * Looking for test storage... 00:08:07.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.198 09:05:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.198 --rc genhtml_branch_coverage=1 00:08:07.198 --rc genhtml_function_coverage=1 00:08:07.198 --rc genhtml_legend=1 00:08:07.198 --rc geninfo_all_blocks=1 00:08:07.198 --rc geninfo_unexecuted_blocks=1 00:08:07.198 00:08:07.198 ' 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.198 --rc genhtml_branch_coverage=1 00:08:07.198 --rc genhtml_function_coverage=1 00:08:07.198 --rc genhtml_legend=1 00:08:07.198 --rc geninfo_all_blocks=1 00:08:07.198 --rc geninfo_unexecuted_blocks=1 00:08:07.198 00:08:07.198 ' 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.198 --rc genhtml_branch_coverage=1 00:08:07.198 --rc genhtml_function_coverage=1 00:08:07.198 --rc genhtml_legend=1 00:08:07.198 --rc geninfo_all_blocks=1 00:08:07.198 --rc geninfo_unexecuted_blocks=1 00:08:07.198 00:08:07.198 ' 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.198 --rc genhtml_branch_coverage=1 00:08:07.198 --rc genhtml_function_coverage=1 00:08:07.198 --rc genhtml_legend=1 00:08:07.198 --rc geninfo_all_blocks=1 00:08:07.198 --rc geninfo_unexecuted_blocks=1 00:08:07.198 00:08:07.198 ' 00:08:07.198 09:05:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:07.198 OK 00:08:07.198 09:05:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:07.198 00:08:07.198 real 0m0.264s 00:08:07.198 user 0m0.161s 00:08:07.198 sys 0m0.114s 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.198 09:05:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:07.198 ************************************ 00:08:07.198 END TEST rpc_client 00:08:07.198 ************************************ 00:08:07.198 09:05:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:07.198 09:05:02 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.198 09:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.198 09:05:02 -- common/autotest_common.sh@10 -- # set +x 00:08:07.198 ************************************ 00:08:07.198 START TEST json_config 00:08:07.198 ************************************ 00:08:07.198 09:05:02 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:07.507 09:05:02 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.507 09:05:02 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.507 09:05:02 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.507 09:05:02 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.507 09:05:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.507 09:05:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.507 09:05:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.507 09:05:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.507 09:05:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.507 09:05:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.507 09:05:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.507 09:05:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.507 09:05:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.507 09:05:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.507 09:05:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.507 09:05:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:07.507 09:05:02 json_config -- scripts/common.sh@345 -- # : 1 00:08:07.507 09:05:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.507 09:05:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.508 09:05:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:07.508 09:05:02 json_config -- scripts/common.sh@353 -- # local d=1 00:08:07.508 09:05:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.508 09:05:02 json_config -- scripts/common.sh@355 -- # echo 1 00:08:07.508 09:05:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.508 09:05:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:07.508 09:05:02 json_config -- scripts/common.sh@353 -- # local d=2 00:08:07.508 09:05:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.508 09:05:02 json_config -- scripts/common.sh@355 -- # echo 2 00:08:07.508 09:05:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.508 09:05:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.508 09:05:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.508 09:05:02 json_config -- scripts/common.sh@368 -- # return 0 00:08:07.508 09:05:02 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.508 09:05:02 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.508 --rc genhtml_branch_coverage=1 00:08:07.508 --rc genhtml_function_coverage=1 00:08:07.508 --rc genhtml_legend=1 00:08:07.508 --rc geninfo_all_blocks=1 00:08:07.508 --rc geninfo_unexecuted_blocks=1 00:08:07.508 00:08:07.508 ' 00:08:07.508 09:05:02 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.508 --rc genhtml_branch_coverage=1 00:08:07.508 --rc genhtml_function_coverage=1 00:08:07.508 --rc genhtml_legend=1 00:08:07.508 --rc geninfo_all_blocks=1 00:08:07.508 --rc geninfo_unexecuted_blocks=1 00:08:07.508 00:08:07.508 ' 00:08:07.508 09:05:02 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.508 --rc genhtml_branch_coverage=1 00:08:07.508 --rc genhtml_function_coverage=1 00:08:07.508 --rc genhtml_legend=1 00:08:07.508 --rc geninfo_all_blocks=1 00:08:07.508 --rc geninfo_unexecuted_blocks=1 00:08:07.508 00:08:07.508 ' 00:08:07.508 09:05:02 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.508 --rc genhtml_branch_coverage=1 00:08:07.508 --rc genhtml_function_coverage=1 00:08:07.508 --rc genhtml_legend=1 00:08:07.508 --rc geninfo_all_blocks=1 00:08:07.508 --rc geninfo_unexecuted_blocks=1 00:08:07.508 00:08:07.508 ' 00:08:07.508 09:05:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.508 09:05:02 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4f672431-7bc3-4680-b192-759d7bcf00f3 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4f672431-7bc3-4680-b192-759d7bcf00f3 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.508 09:05:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.508 09:05:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.508 09:05:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.508 09:05:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.508 09:05:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.508 09:05:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.508 09:05:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.508 09:05:02 json_config -- paths/export.sh@5 -- # export PATH 00:08:07.508 09:05:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@51 -- # : 0 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.508 09:05:02 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.508 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.508 09:05:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.508 09:05:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:07.508 09:05:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:07.508 09:05:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:07.508 09:05:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:07.508 09:05:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:07.508 WARNING: No tests are enabled so not running JSON configuration tests 00:08:07.508 09:05:02 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:07.508 09:05:02 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:07.508 00:08:07.508 real 0m0.206s 00:08:07.508 user 0m0.139s 00:08:07.508 sys 0m0.071s 00:08:07.508 09:05:02 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.508 09:05:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.508 ************************************ 00:08:07.508 END TEST json_config 00:08:07.508 ************************************ 00:08:07.508 09:05:02 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:07.508 09:05:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.508 09:05:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.508 09:05:02 -- common/autotest_common.sh@10 -- # set +x 00:08:07.508 ************************************ 00:08:07.508 START TEST json_config_extra_key 00:08:07.509 ************************************ 00:08:07.509 09:05:02 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:07.509 09:05:02 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.509 09:05:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.509 09:05:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.769 09:05:02 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.769 09:05:02 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.769 09:05:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:07.769 09:05:02 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.769 09:05:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.769 --rc genhtml_branch_coverage=1 00:08:07.769 --rc genhtml_function_coverage=1 00:08:07.769 --rc genhtml_legend=1 00:08:07.769 --rc geninfo_all_blocks=1 00:08:07.769 --rc geninfo_unexecuted_blocks=1 00:08:07.769 00:08:07.769 ' 00:08:07.769 09:05:02 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.769 --rc genhtml_branch_coverage=1 00:08:07.769 --rc genhtml_function_coverage=1 00:08:07.769 --rc genhtml_legend=1 00:08:07.769 --rc geninfo_all_blocks=1 00:08:07.770 --rc geninfo_unexecuted_blocks=1 00:08:07.770 00:08:07.770 ' 00:08:07.770 09:05:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.770 --rc genhtml_branch_coverage=1 00:08:07.770 --rc genhtml_function_coverage=1 00:08:07.770 --rc genhtml_legend=1 00:08:07.770 --rc geninfo_all_blocks=1 00:08:07.770 --rc geninfo_unexecuted_blocks=1 00:08:07.770 00:08:07.770 ' 00:08:07.770 09:05:02 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.770 --rc genhtml_branch_coverage=1 00:08:07.770 --rc 
genhtml_function_coverage=1 00:08:07.770 --rc genhtml_legend=1 00:08:07.770 --rc geninfo_all_blocks=1 00:08:07.770 --rc geninfo_unexecuted_blocks=1 00:08:07.770 00:08:07.770 ' 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4f672431-7bc3-4680-b192-759d7bcf00f3 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4f672431-7bc3-4680-b192-759d7bcf00f3 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.770 09:05:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.770 09:05:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.770 09:05:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.770 09:05:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.770 09:05:02 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.770 09:05:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.770 09:05:02 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.770 09:05:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:07.770 09:05:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.770 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.770 09:05:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:07.770 INFO: launching applications... 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
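The "[: : integer expression expected" complaint, printed here and again in the json_config run above, comes from test/nvmf/common.sh line 33, where build_nvmf_app_args feeds an empty string to a numeric test ('[' '' -eq 1 ']'). A minimal sketch of a guard for that pattern, assuming only that the variable may be empty or unset (the name nvme_remote is hypothetical, not the script's):

nvme_remote=""                          # empty, as in the traced run
if [ "${nvme_remote:-0}" -eq 1 ]; then  # default to 0 so [ always sees an integer
    echo "remote path enabled"
fi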
00:08:07.770 09:05:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58732 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:07.770 Waiting for target to run... 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:07.770 09:05:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58732 /var/tmp/spdk_tgt.sock 00:08:07.770 09:05:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58732 ']' 00:08:07.770 09:05:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:07.770 09:05:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.770 09:05:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:07.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:07.770 09:05:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.770 09:05:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:08.030 [2024-11-20 09:05:02.887738] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:08:08.030 [2024-11-20 09:05:02.887984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58732 ] 00:08:08.598 [2024-11-20 09:05:03.522904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.598 [2024-11-20 09:05:03.660195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.535 00:08:09.535 INFO: shutting down applications... 00:08:09.535 09:05:04 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.535 09:05:04 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:09.535 09:05:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:09.535 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
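json_config_test_start_app above launches spdk_tgt with -m 0x1 -s 1024 and blocks in waitforlisten (max_retries=100) until pid 58732 answers on /var/tmp/spdk_tgt.sock; the shutdown traced next mirrors it with SIGINT plus a kill -0 poll every 0.5 s, up to 30 tries. A rough sketch of both waits, assuming only that rpc.py fails while the socket is down (the real waitforlisten does more bookkeeping):

pid=58732 sock=/var/tmp/spdk_tgt.sock

# start side: wait until the target answers RPC on its UNIX-domain socket
for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || exit 1                           # died during startup
    scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && break
    sleep 0.5
done

# stop side: SIGINT, then poll until the pid is gone (30 x 0.5 s budget)
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break
    sleep 0.5
done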
00:08:09.535 09:05:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:09.535 09:05:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:09.535 09:05:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:09.535 09:05:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58732 ]] 00:08:09.535 09:05:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58732 00:08:09.535 09:05:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:09.535 09:05:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:09.535 09:05:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58732 00:08:09.535 09:05:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:09.795 09:05:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:09.795 09:05:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:09.795 09:05:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58732 00:08:09.795 09:05:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:10.363 09:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:10.363 09:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:10.363 09:05:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58732 00:08:10.363 09:05:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:10.929 09:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:10.929 09:05:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:10.929 09:05:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58732 00:08:10.929 09:05:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:11.506 09:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:11.506 09:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:11.506 09:05:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58732 00:08:11.506 09:05:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:11.781 09:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:11.781 09:05:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:11.781 09:05:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58732 00:08:11.781 09:05:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:12.350 09:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:12.350 09:05:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:12.350 09:05:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58732 00:08:12.350 SPDK target shutdown done 00:08:12.350 09:05:07 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:12.350 09:05:07 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:12.350 09:05:07 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:12.350 09:05:07 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:12.350 Success 00:08:12.350 09:05:07 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:12.350 ************************************ 00:08:12.350 END TEST json_config_extra_key 00:08:12.350 
************************************ 00:08:12.350 00:08:12.350 real 0m4.827s 00:08:12.350 user 0m4.414s 00:08:12.350 sys 0m0.938s 00:08:12.350 09:05:07 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.350 09:05:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:12.350 09:05:07 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:12.350 09:05:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:12.350 09:05:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.350 09:05:07 -- common/autotest_common.sh@10 -- # set +x 00:08:12.350 ************************************ 00:08:12.350 START TEST alias_rpc 00:08:12.350 ************************************ 00:08:12.350 09:05:07 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:12.610 * Looking for test storage... 00:08:12.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.610 09:05:07 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.610 --rc genhtml_branch_coverage=1 00:08:12.610 --rc genhtml_function_coverage=1 00:08:12.610 --rc genhtml_legend=1 00:08:12.610 --rc geninfo_all_blocks=1 00:08:12.610 --rc geninfo_unexecuted_blocks=1 00:08:12.610 00:08:12.610 ' 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.610 --rc genhtml_branch_coverage=1 00:08:12.610 --rc genhtml_function_coverage=1 00:08:12.610 --rc genhtml_legend=1 00:08:12.610 --rc geninfo_all_blocks=1 00:08:12.610 --rc geninfo_unexecuted_blocks=1 00:08:12.610 00:08:12.610 ' 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.610 --rc genhtml_branch_coverage=1 00:08:12.610 --rc genhtml_function_coverage=1 00:08:12.610 --rc genhtml_legend=1 00:08:12.610 --rc geninfo_all_blocks=1 00:08:12.610 --rc geninfo_unexecuted_blocks=1 00:08:12.610 00:08:12.610 ' 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.610 --rc genhtml_branch_coverage=1 00:08:12.610 --rc genhtml_function_coverage=1 00:08:12.610 --rc genhtml_legend=1 00:08:12.610 --rc geninfo_all_blocks=1 00:08:12.610 --rc geninfo_unexecuted_blocks=1 00:08:12.610 00:08:12.610 ' 00:08:12.610 09:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:12.610 09:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58844 00:08:12.610 09:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:12.610 09:05:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58844 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58844 ']' 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
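The lcov gate that opens every suite is scripts/common.sh's cmp_versions, traced in full above: split both version strings on ., - and : into arrays, then compare them numerically component by component (here ver1=(1 15), ver2=(2), and 1 < 2, so "lt 1.15 2" returns 0). A condensed sketch of the same idea, handling numeric components only (the name version_lt is mine, not the script's):

version_lt() {                        # returns 0 when $1 < $2
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing parts count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                          # equal is not less-than
}
version_lt 1.15 2 && echo "lcov is older than 2"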
00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.610 09:05:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.869 [2024-11-20 09:05:07.799177] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:08:12.869 [2024-11-20 09:05:07.799409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58844 ] 00:08:13.128 [2024-11-20 09:05:07.997265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.128 [2024-11-20 09:05:08.169629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.122 09:05:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.122 09:05:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:14.122 09:05:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:14.688 09:05:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58844 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58844 ']' 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58844 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58844 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.688 killing process with pid 58844 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58844' 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@973 -- # kill 58844 00:08:14.688 09:05:09 alias_rpc -- common/autotest_common.sh@978 -- # wait 58844 00:08:17.221 00:08:17.221 real 0m4.631s 00:08:17.221 user 0m4.646s 00:08:17.221 sys 0m0.886s 00:08:17.221 09:05:12 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.221 09:05:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.221 ************************************ 00:08:17.221 END TEST alias_rpc 00:08:17.221 ************************************ 00:08:17.221 09:05:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:17.221 09:05:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:17.221 09:05:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.221 09:05:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.221 09:05:12 -- common/autotest_common.sh@10 -- # set +x 00:08:17.221 ************************************ 00:08:17.221 START TEST spdkcli_tcp 00:08:17.221 ************************************ 00:08:17.221 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:17.221 * Looking for test storage... 
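killprocess, traced at the end of the alias_rpc run above, follows a fixed recipe: refuse an empty pid, confirm the process is alive with kill -0, read its command name via ps (reactor_0 here) so it never kills a sudo wrapper, then kill and wait to reap the exit status. A minimal sketch of that recipe:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1       # not running
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1               # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                  # reap it; works for our own children
}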
00:08:17.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:17.221 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.221 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.221 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.221 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.221 09:05:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:17.221 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.221 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.221 --rc genhtml_branch_coverage=1 00:08:17.221 --rc genhtml_function_coverage=1 00:08:17.221 --rc genhtml_legend=1 00:08:17.221 --rc geninfo_all_blocks=1 00:08:17.221 --rc geninfo_unexecuted_blocks=1 00:08:17.221 00:08:17.221 ' 00:08:17.221 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.221 --rc genhtml_branch_coverage=1 00:08:17.221 --rc genhtml_function_coverage=1 00:08:17.221 --rc genhtml_legend=1 00:08:17.221 --rc geninfo_all_blocks=1 00:08:17.221 --rc geninfo_unexecuted_blocks=1 00:08:17.221 
00:08:17.221 ' 00:08:17.221 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.221 --rc genhtml_branch_coverage=1 00:08:17.221 --rc genhtml_function_coverage=1 00:08:17.221 --rc genhtml_legend=1 00:08:17.222 --rc geninfo_all_blocks=1 00:08:17.222 --rc geninfo_unexecuted_blocks=1 00:08:17.222 00:08:17.222 ' 00:08:17.222 09:05:12 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.222 --rc genhtml_branch_coverage=1 00:08:17.222 --rc genhtml_function_coverage=1 00:08:17.222 --rc genhtml_legend=1 00:08:17.222 --rc geninfo_all_blocks=1 00:08:17.222 --rc geninfo_unexecuted_blocks=1 00:08:17.222 00:08:17.222 ' 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:17.222 09:05:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.222 09:05:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58957 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58957 00:08:17.222 09:05:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58957 ']' 00:08:17.222 09:05:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.222 09:05:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:17.222 09:05:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.222 09:05:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.222 09:05:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.222 09:05:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.481 [2024-11-20 09:05:12.490512] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
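tcp.sh exercises the same RPC server over TCP: pid 58957 listens on /var/tmp/spdk.sock, a socat process bridges 127.0.0.1:9998 to that socket, and rpc.py talks to the TCP side, as the trace below shows. The bridge reduced to its two commands (addresses and flags as in this run):

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP front-end for the UNIX socket
socat_pid=$!
# flags as in the trace: -r retries, -t timeout, -s/-p server address and port
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"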
00:08:17.481 [2024-11-20 09:05:12.490761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58957 ] 00:08:17.739 [2024-11-20 09:05:12.685839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.739 [2024-11-20 09:05:12.833547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.739 [2024-11-20 09:05:12.833566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.674 09:05:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.674 09:05:13 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:18.674 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58974 00:08:18.674 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:18.674 09:05:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:18.933 [ 00:08:18.933 "bdev_malloc_delete", 00:08:18.933 "bdev_malloc_create", 00:08:18.933 "bdev_null_resize", 00:08:18.933 "bdev_null_delete", 00:08:18.933 "bdev_null_create", 00:08:18.933 "bdev_nvme_cuse_unregister", 00:08:18.933 "bdev_nvme_cuse_register", 00:08:18.933 "bdev_opal_new_user", 00:08:18.933 "bdev_opal_set_lock_state", 00:08:18.933 "bdev_opal_delete", 00:08:18.933 "bdev_opal_get_info", 00:08:18.933 "bdev_opal_create", 00:08:18.933 "bdev_nvme_opal_revert", 00:08:18.933 "bdev_nvme_opal_init", 00:08:18.933 "bdev_nvme_send_cmd", 00:08:18.933 "bdev_nvme_set_keys", 00:08:18.933 "bdev_nvme_get_path_iostat", 00:08:18.933 "bdev_nvme_get_mdns_discovery_info", 00:08:18.933 "bdev_nvme_stop_mdns_discovery", 00:08:18.933 "bdev_nvme_start_mdns_discovery", 00:08:18.933 "bdev_nvme_set_multipath_policy", 00:08:18.933 "bdev_nvme_set_preferred_path", 00:08:18.933 "bdev_nvme_get_io_paths", 00:08:18.933 "bdev_nvme_remove_error_injection", 00:08:18.933 "bdev_nvme_add_error_injection", 00:08:18.933 "bdev_nvme_get_discovery_info", 00:08:18.933 "bdev_nvme_stop_discovery", 00:08:18.933 "bdev_nvme_start_discovery", 00:08:18.933 "bdev_nvme_get_controller_health_info", 00:08:18.933 "bdev_nvme_disable_controller", 00:08:18.933 "bdev_nvme_enable_controller", 00:08:18.933 "bdev_nvme_reset_controller", 00:08:18.933 "bdev_nvme_get_transport_statistics", 00:08:18.933 "bdev_nvme_apply_firmware", 00:08:18.933 "bdev_nvme_detach_controller", 00:08:18.933 "bdev_nvme_get_controllers", 00:08:18.933 "bdev_nvme_attach_controller", 00:08:18.933 "bdev_nvme_set_hotplug", 00:08:18.933 "bdev_nvme_set_options", 00:08:18.933 "bdev_passthru_delete", 00:08:18.933 "bdev_passthru_create", 00:08:18.933 "bdev_lvol_set_parent_bdev", 00:08:18.933 "bdev_lvol_set_parent", 00:08:18.933 "bdev_lvol_check_shallow_copy", 00:08:18.933 "bdev_lvol_start_shallow_copy", 00:08:18.933 "bdev_lvol_grow_lvstore", 00:08:18.933 "bdev_lvol_get_lvols", 00:08:18.933 "bdev_lvol_get_lvstores", 00:08:18.933 "bdev_lvol_delete", 00:08:18.933 "bdev_lvol_set_read_only", 00:08:18.933 "bdev_lvol_resize", 00:08:18.933 "bdev_lvol_decouple_parent", 00:08:18.933 "bdev_lvol_inflate", 00:08:18.933 "bdev_lvol_rename", 00:08:18.933 "bdev_lvol_clone_bdev", 00:08:18.933 "bdev_lvol_clone", 00:08:18.933 "bdev_lvol_snapshot", 00:08:18.933 "bdev_lvol_create", 00:08:18.933 "bdev_lvol_delete_lvstore", 00:08:18.933 "bdev_lvol_rename_lvstore", 00:08:18.933 
"bdev_lvol_create_lvstore", 00:08:18.933 "bdev_raid_set_options", 00:08:18.933 "bdev_raid_remove_base_bdev", 00:08:18.933 "bdev_raid_add_base_bdev", 00:08:18.933 "bdev_raid_delete", 00:08:18.933 "bdev_raid_create", 00:08:18.933 "bdev_raid_get_bdevs", 00:08:18.933 "bdev_error_inject_error", 00:08:18.933 "bdev_error_delete", 00:08:18.933 "bdev_error_create", 00:08:18.933 "bdev_split_delete", 00:08:18.933 "bdev_split_create", 00:08:18.933 "bdev_delay_delete", 00:08:18.933 "bdev_delay_create", 00:08:18.933 "bdev_delay_update_latency", 00:08:18.933 "bdev_zone_block_delete", 00:08:18.933 "bdev_zone_block_create", 00:08:18.933 "blobfs_create", 00:08:18.933 "blobfs_detect", 00:08:18.933 "blobfs_set_cache_size", 00:08:18.933 "bdev_xnvme_delete", 00:08:18.933 "bdev_xnvme_create", 00:08:18.933 "bdev_aio_delete", 00:08:18.933 "bdev_aio_rescan", 00:08:18.933 "bdev_aio_create", 00:08:18.933 "bdev_ftl_set_property", 00:08:18.933 "bdev_ftl_get_properties", 00:08:18.933 "bdev_ftl_get_stats", 00:08:18.933 "bdev_ftl_unmap", 00:08:18.933 "bdev_ftl_unload", 00:08:18.933 "bdev_ftl_delete", 00:08:18.933 "bdev_ftl_load", 00:08:18.933 "bdev_ftl_create", 00:08:18.933 "bdev_virtio_attach_controller", 00:08:18.933 "bdev_virtio_scsi_get_devices", 00:08:18.933 "bdev_virtio_detach_controller", 00:08:18.933 "bdev_virtio_blk_set_hotplug", 00:08:18.933 "bdev_iscsi_delete", 00:08:18.933 "bdev_iscsi_create", 00:08:18.933 "bdev_iscsi_set_options", 00:08:18.933 "accel_error_inject_error", 00:08:18.933 "ioat_scan_accel_module", 00:08:18.933 "dsa_scan_accel_module", 00:08:18.933 "iaa_scan_accel_module", 00:08:18.933 "keyring_file_remove_key", 00:08:18.933 "keyring_file_add_key", 00:08:18.933 "keyring_linux_set_options", 00:08:18.933 "fsdev_aio_delete", 00:08:18.933 "fsdev_aio_create", 00:08:18.933 "iscsi_get_histogram", 00:08:18.933 "iscsi_enable_histogram", 00:08:18.933 "iscsi_set_options", 00:08:18.933 "iscsi_get_auth_groups", 00:08:18.933 "iscsi_auth_group_remove_secret", 00:08:18.933 "iscsi_auth_group_add_secret", 00:08:18.933 "iscsi_delete_auth_group", 00:08:18.933 "iscsi_create_auth_group", 00:08:18.933 "iscsi_set_discovery_auth", 00:08:18.933 "iscsi_get_options", 00:08:18.933 "iscsi_target_node_request_logout", 00:08:18.933 "iscsi_target_node_set_redirect", 00:08:18.933 "iscsi_target_node_set_auth", 00:08:18.933 "iscsi_target_node_add_lun", 00:08:18.933 "iscsi_get_stats", 00:08:18.933 "iscsi_get_connections", 00:08:18.933 "iscsi_portal_group_set_auth", 00:08:18.933 "iscsi_start_portal_group", 00:08:18.933 "iscsi_delete_portal_group", 00:08:18.933 "iscsi_create_portal_group", 00:08:18.933 "iscsi_get_portal_groups", 00:08:18.933 "iscsi_delete_target_node", 00:08:18.933 "iscsi_target_node_remove_pg_ig_maps", 00:08:18.933 "iscsi_target_node_add_pg_ig_maps", 00:08:18.933 "iscsi_create_target_node", 00:08:18.933 "iscsi_get_target_nodes", 00:08:18.933 "iscsi_delete_initiator_group", 00:08:18.933 "iscsi_initiator_group_remove_initiators", 00:08:18.933 "iscsi_initiator_group_add_initiators", 00:08:18.933 "iscsi_create_initiator_group", 00:08:18.933 "iscsi_get_initiator_groups", 00:08:18.933 "nvmf_set_crdt", 00:08:18.933 "nvmf_set_config", 00:08:18.933 "nvmf_set_max_subsystems", 00:08:18.933 "nvmf_stop_mdns_prr", 00:08:18.933 "nvmf_publish_mdns_prr", 00:08:18.933 "nvmf_subsystem_get_listeners", 00:08:18.933 "nvmf_subsystem_get_qpairs", 00:08:18.933 "nvmf_subsystem_get_controllers", 00:08:18.933 "nvmf_get_stats", 00:08:18.933 "nvmf_get_transports", 00:08:18.933 "nvmf_create_transport", 00:08:18.933 "nvmf_get_targets", 00:08:18.933 
"nvmf_delete_target", 00:08:18.933 "nvmf_create_target", 00:08:18.933 "nvmf_subsystem_allow_any_host", 00:08:18.933 "nvmf_subsystem_set_keys", 00:08:18.933 "nvmf_subsystem_remove_host", 00:08:18.933 "nvmf_subsystem_add_host", 00:08:18.933 "nvmf_ns_remove_host", 00:08:18.933 "nvmf_ns_add_host", 00:08:18.933 "nvmf_subsystem_remove_ns", 00:08:18.933 "nvmf_subsystem_set_ns_ana_group", 00:08:18.933 "nvmf_subsystem_add_ns", 00:08:18.933 "nvmf_subsystem_listener_set_ana_state", 00:08:18.933 "nvmf_discovery_get_referrals", 00:08:18.933 "nvmf_discovery_remove_referral", 00:08:18.933 "nvmf_discovery_add_referral", 00:08:18.933 "nvmf_subsystem_remove_listener", 00:08:18.933 "nvmf_subsystem_add_listener", 00:08:18.933 "nvmf_delete_subsystem", 00:08:18.933 "nvmf_create_subsystem", 00:08:18.933 "nvmf_get_subsystems", 00:08:18.933 "env_dpdk_get_mem_stats", 00:08:18.933 "nbd_get_disks", 00:08:18.933 "nbd_stop_disk", 00:08:18.933 "nbd_start_disk", 00:08:18.933 "ublk_recover_disk", 00:08:18.933 "ublk_get_disks", 00:08:18.933 "ublk_stop_disk", 00:08:18.933 "ublk_start_disk", 00:08:18.933 "ublk_destroy_target", 00:08:18.933 "ublk_create_target", 00:08:18.933 "virtio_blk_create_transport", 00:08:18.933 "virtio_blk_get_transports", 00:08:18.933 "vhost_controller_set_coalescing", 00:08:18.933 "vhost_get_controllers", 00:08:18.933 "vhost_delete_controller", 00:08:18.933 "vhost_create_blk_controller", 00:08:18.933 "vhost_scsi_controller_remove_target", 00:08:18.933 "vhost_scsi_controller_add_target", 00:08:18.933 "vhost_start_scsi_controller", 00:08:18.933 "vhost_create_scsi_controller", 00:08:18.933 "thread_set_cpumask", 00:08:18.933 "scheduler_set_options", 00:08:18.933 "framework_get_governor", 00:08:18.933 "framework_get_scheduler", 00:08:18.933 "framework_set_scheduler", 00:08:18.933 "framework_get_reactors", 00:08:18.933 "thread_get_io_channels", 00:08:18.933 "thread_get_pollers", 00:08:18.933 "thread_get_stats", 00:08:18.933 "framework_monitor_context_switch", 00:08:18.933 "spdk_kill_instance", 00:08:18.933 "log_enable_timestamps", 00:08:18.933 "log_get_flags", 00:08:18.933 "log_clear_flag", 00:08:18.933 "log_set_flag", 00:08:18.933 "log_get_level", 00:08:18.933 "log_set_level", 00:08:18.933 "log_get_print_level", 00:08:18.933 "log_set_print_level", 00:08:18.933 "framework_enable_cpumask_locks", 00:08:18.933 "framework_disable_cpumask_locks", 00:08:18.933 "framework_wait_init", 00:08:18.933 "framework_start_init", 00:08:18.933 "scsi_get_devices", 00:08:18.933 "bdev_get_histogram", 00:08:18.933 "bdev_enable_histogram", 00:08:18.933 "bdev_set_qos_limit", 00:08:18.933 "bdev_set_qd_sampling_period", 00:08:18.933 "bdev_get_bdevs", 00:08:18.933 "bdev_reset_iostat", 00:08:18.933 "bdev_get_iostat", 00:08:18.933 "bdev_examine", 00:08:18.933 "bdev_wait_for_examine", 00:08:18.933 "bdev_set_options", 00:08:18.933 "accel_get_stats", 00:08:18.933 "accel_set_options", 00:08:18.934 "accel_set_driver", 00:08:18.934 "accel_crypto_key_destroy", 00:08:18.934 "accel_crypto_keys_get", 00:08:18.934 "accel_crypto_key_create", 00:08:18.934 "accel_assign_opc", 00:08:18.934 "accel_get_module_info", 00:08:18.934 "accel_get_opc_assignments", 00:08:18.934 "vmd_rescan", 00:08:18.934 "vmd_remove_device", 00:08:18.934 "vmd_enable", 00:08:18.934 "sock_get_default_impl", 00:08:18.934 "sock_set_default_impl", 00:08:18.934 "sock_impl_set_options", 00:08:18.934 "sock_impl_get_options", 00:08:18.934 "iobuf_get_stats", 00:08:18.934 "iobuf_set_options", 00:08:18.934 "keyring_get_keys", 00:08:18.934 "framework_get_pci_devices", 00:08:18.934 
"framework_get_config", 00:08:18.934 "framework_get_subsystems", 00:08:18.934 "fsdev_set_opts", 00:08:18.934 "fsdev_get_opts", 00:08:18.934 "trace_get_info", 00:08:18.934 "trace_get_tpoint_group_mask", 00:08:18.934 "trace_disable_tpoint_group", 00:08:18.934 "trace_enable_tpoint_group", 00:08:18.934 "trace_clear_tpoint_mask", 00:08:18.934 "trace_set_tpoint_mask", 00:08:18.934 "notify_get_notifications", 00:08:18.934 "notify_get_types", 00:08:18.934 "spdk_get_version", 00:08:18.934 "rpc_get_methods" 00:08:18.934 ] 00:08:18.934 09:05:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:18.934 09:05:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.934 09:05:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.192 09:05:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:19.192 09:05:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58957 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58957 ']' 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58957 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58957 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.192 killing process with pid 58957 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58957' 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58957 00:08:19.192 09:05:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58957 00:08:21.725 00:08:21.725 real 0m4.357s 00:08:21.725 user 0m7.842s 00:08:21.725 sys 0m0.775s 00:08:21.725 09:05:16 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.725 09:05:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.725 ************************************ 00:08:21.725 END TEST spdkcli_tcp 00:08:21.725 ************************************ 00:08:21.725 09:05:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:21.725 09:05:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.725 09:05:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.725 09:05:16 -- common/autotest_common.sh@10 -- # set +x 00:08:21.725 ************************************ 00:08:21.725 START TEST dpdk_mem_utility 00:08:21.725 ************************************ 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:21.725 * Looking for test storage... 
00:08:21.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.725 09:05:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.725 --rc genhtml_branch_coverage=1 00:08:21.725 --rc genhtml_function_coverage=1 00:08:21.725 --rc genhtml_legend=1 00:08:21.725 --rc geninfo_all_blocks=1 00:08:21.725 --rc geninfo_unexecuted_blocks=1 00:08:21.725 00:08:21.725 ' 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.725 --rc 
genhtml_branch_coverage=1 00:08:21.725 --rc genhtml_function_coverage=1 00:08:21.725 --rc genhtml_legend=1 00:08:21.725 --rc geninfo_all_blocks=1 00:08:21.725 --rc geninfo_unexecuted_blocks=1 00:08:21.725 00:08:21.725 ' 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.725 --rc genhtml_branch_coverage=1 00:08:21.725 --rc genhtml_function_coverage=1 00:08:21.725 --rc genhtml_legend=1 00:08:21.725 --rc geninfo_all_blocks=1 00:08:21.725 --rc geninfo_unexecuted_blocks=1 00:08:21.725 00:08:21.725 ' 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.725 --rc genhtml_branch_coverage=1 00:08:21.725 --rc genhtml_function_coverage=1 00:08:21.725 --rc genhtml_legend=1 00:08:21.725 --rc geninfo_all_blocks=1 00:08:21.725 --rc geninfo_unexecuted_blocks=1 00:08:21.725 00:08:21.725 ' 00:08:21.725 09:05:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:21.725 09:05:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59079 00:08:21.725 09:05:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:21.725 09:05:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59079 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59079 ']' 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.725 09:05:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:21.984 [2024-11-20 09:05:16.864716] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
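test_dpdk_mem_info.sh drives two pieces, both visible below: the env_dpdk_get_mem_stats RPC, which has the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which digests that dump (a plain run prints the heap/mempool/memzone summary; -m 0 prints the full element listing for heap id 0). Reduced to the commands involved:

scripts/rpc.py env_dpdk_get_mem_stats     # target dumps its state to /tmp/spdk_mem_dump.txt
scripts/dpdk_mem_info.py                  # summary: heaps, mempools, memzones
scripts/dpdk_mem_info.py -m 0             # per-element dump of heap id 0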
00:08:21.984 [2024-11-20 09:05:16.865110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59079 ] 00:08:21.984 [2024-11-20 09:05:17.059269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.243 [2024-11-20 09:05:17.212601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.177 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.177 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:23.177 09:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:23.177 09:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:23.177 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.177 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:23.177 { 00:08:23.177 "filename": "/tmp/spdk_mem_dump.txt" 00:08:23.177 } 00:08:23.177 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.177 09:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:23.177 DPDK memory size 816.000000 MiB in 1 heap(s) 00:08:23.177 1 heaps totaling size 816.000000 MiB 00:08:23.177 size: 816.000000 MiB heap id: 0 00:08:23.177 end heaps---------- 00:08:23.177 9 mempools totaling size 595.772034 MiB 00:08:23.177 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:23.177 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:23.177 size: 92.545471 MiB name: bdev_io_59079 00:08:23.177 size: 50.003479 MiB name: msgpool_59079 00:08:23.177 size: 36.509338 MiB name: fsdev_io_59079 00:08:23.177 size: 21.763794 MiB name: PDU_Pool 00:08:23.177 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:23.177 size: 4.133484 MiB name: evtpool_59079 00:08:23.177 size: 0.026123 MiB name: Session_Pool 00:08:23.177 end mempools------- 00:08:23.177 6 memzones totaling size 4.142822 MiB 00:08:23.177 size: 1.000366 MiB name: RG_ring_0_59079 00:08:23.177 size: 1.000366 MiB name: RG_ring_1_59079 00:08:23.177 size: 1.000366 MiB name: RG_ring_4_59079 00:08:23.177 size: 1.000366 MiB name: RG_ring_5_59079 00:08:23.177 size: 0.125366 MiB name: RG_ring_2_59079 00:08:23.177 size: 0.015991 MiB name: RG_ring_3_59079 00:08:23.177 end memzones------- 00:08:23.177 09:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:23.437 heap id: 0 total size: 816.000000 MiB number of busy elements: 313 number of free elements: 18 00:08:23.437 list of free elements. 
size: 16.791870 MiB 00:08:23.437 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:23.437 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:23.437 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:23.437 element at address: 0x200018d00040 with size: 0.999939 MiB 00:08:23.437 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:23.437 element at address: 0x200019200000 with size: 0.999084 MiB 00:08:23.437 element at address: 0x200031e00000 with size: 0.994324 MiB 00:08:23.437 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:23.437 element at address: 0x200018a00000 with size: 0.959656 MiB 00:08:23.437 element at address: 0x200019500040 with size: 0.936401 MiB 00:08:23.437 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:23.437 element at address: 0x20001ac00000 with size: 0.562439 MiB 00:08:23.437 element at address: 0x200000c00000 with size: 0.490173 MiB 00:08:23.437 element at address: 0x200018e00000 with size: 0.487976 MiB 00:08:23.437 element at address: 0x200019600000 with size: 0.485413 MiB 00:08:23.437 element at address: 0x200012c00000 with size: 0.443237 MiB 00:08:23.437 element at address: 0x200028000000 with size: 0.390442 MiB 00:08:23.437 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:23.437 list of standard malloc elements. size: 199.287231 MiB 00:08:23.437 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:23.437 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:23.437 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:08:23.437 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:23.437 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:23.437 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:23.437 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:08:23.437 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:23.437 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:23.437 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:08:23.437 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:23.437 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:08:23.437 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:23.437 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:23.438 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:23.438 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c71780 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c71880 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c71980 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c72080 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012c72180 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:23.438 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:08:23.438 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 
00:08:23.438 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:08:23.438 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:08:23.439 element at 
address: 0x20001ac951c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:08:23.439 element at address: 0x200028063f40 with size: 0.000244 MiB 00:08:23.439 element at address: 0x200028064040 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806af80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b080 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b180 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b280 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b380 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b480 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b580 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b680 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b780 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b880 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806b980 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806be80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c080 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c180 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c280 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c380 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c480 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c580 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c680 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c780 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c880 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806c980 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d080 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d180 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d280 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d380 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d480 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d580 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d680 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d780 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d880 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806d980 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806da80 
with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806db80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806de80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806df80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e080 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e180 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e280 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e380 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e480 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e580 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e680 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e780 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e880 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806e980 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f080 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f180 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f280 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f380 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f480 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f580 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f680 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f780 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f880 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806f980 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:08:23.439 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:08:23.439 list of memzone associated elements. 
size: 599.920898 MiB 00:08:23.439 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:08:23.439 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:23.439 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:08:23.439 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:23.439 element at address: 0x200012df4740 with size: 92.045105 MiB 00:08:23.439 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59079_0 00:08:23.439 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:23.439 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59079_0 00:08:23.439 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:23.439 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59079_0 00:08:23.439 element at address: 0x2000197be900 with size: 20.255615 MiB 00:08:23.439 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:23.439 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:08:23.439 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:23.439 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:23.439 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59079_0 00:08:23.439 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:23.439 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59079 00:08:23.439 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:23.439 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59079 00:08:23.439 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:23.439 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:23.439 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:08:23.439 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:23.439 element at address: 0x200018afde00 with size: 1.008179 MiB 00:08:23.439 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:23.439 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:08:23.439 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:23.439 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:23.439 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59079 00:08:23.439 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:23.440 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59079 00:08:23.440 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:08:23.440 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59079 00:08:23.440 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:08:23.440 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59079 00:08:23.440 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:23.440 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59079 00:08:23.440 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:23.440 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59079 00:08:23.440 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:08:23.440 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:23.440 element at address: 0x200012c72280 with size: 0.500549 MiB 00:08:23.440 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:23.440 element at address: 0x20001967c440 with size: 0.250549 MiB 00:08:23.440 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:23.440 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:23.440 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59079 00:08:23.440 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:23.440 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59079 00:08:23.440 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:08:23.440 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:23.440 element at address: 0x200028064140 with size: 0.023804 MiB 00:08:23.440 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:23.440 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:23.440 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59079 00:08:23.440 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:08:23.440 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:23.440 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:23.440 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59079 00:08:23.440 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:23.440 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59079 00:08:23.440 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:23.440 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59079 00:08:23.440 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:08:23.440 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:23.440 09:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:23.440 09:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59079 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59079 ']' 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59079 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59079 00:08:23.440 killing process with pid 59079 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59079' 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59079 00:08:23.440 09:05:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59079 00:08:25.968 00:08:25.968 real 0m4.343s 00:08:25.968 user 0m4.336s 00:08:25.968 sys 0m0.706s 00:08:25.968 09:05:20 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.968 ************************************ 00:08:25.968 END TEST dpdk_mem_utility 00:08:25.968 ************************************ 00:08:25.968 09:05:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:25.968 09:05:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:25.968 09:05:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.968 09:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.968 09:05:20 -- common/autotest_common.sh@10 -- # set +x 
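The report above is the dpdk_mem_utility payload: a heap summary, a "list of standard malloc elements" giving every live allocation's address and size, and a "list of memzone associated elements" tying those allocations back to named memzones (the MP_*_59079 pools belong to the target app with pid 59079). Once the report is captured, the harness tears the target down; a minimal sketch of the teardown pattern visible in the trace, with values hard-coded for illustration where the harness derives them dynamically:

# Mirror of the killprocess sequence traced above: kill -0 liveness probe,
# process-name guard, kill, then wait. PID 59079 is taken from this run.
pid=59079
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in this run
    if [ "$name" != "sudo" ]; then            # same guard as the trace
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true       # reaps only if it is our child
    fi
fi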
00:08:25.968 ************************************ 00:08:25.968 START TEST event 00:08:25.969 ************************************ 00:08:25.969 09:05:20 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:25.969 * Looking for test storage... 00:08:25.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:25.969 09:05:21 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:25.969 09:05:21 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:25.969 09:05:21 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:26.227 09:05:21 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:26.227 09:05:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.227 09:05:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.227 09:05:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.227 09:05:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.227 09:05:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.227 09:05:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.227 09:05:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.227 09:05:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.227 09:05:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.227 09:05:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.227 09:05:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.227 09:05:21 event -- scripts/common.sh@344 -- # case "$op" in 00:08:26.227 09:05:21 event -- scripts/common.sh@345 -- # : 1 00:08:26.227 09:05:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.227 09:05:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:26.227 09:05:21 event -- scripts/common.sh@365 -- # decimal 1 00:08:26.227 09:05:21 event -- scripts/common.sh@353 -- # local d=1 00:08:26.227 09:05:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.227 09:05:21 event -- scripts/common.sh@355 -- # echo 1 00:08:26.227 09:05:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.227 09:05:21 event -- scripts/common.sh@366 -- # decimal 2 00:08:26.227 09:05:21 event -- scripts/common.sh@353 -- # local d=2 00:08:26.227 09:05:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.227 09:05:21 event -- scripts/common.sh@355 -- # echo 2 00:08:26.227 09:05:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.227 09:05:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.227 09:05:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.227 09:05:21 event -- scripts/common.sh@368 -- # return 0 00:08:26.227 09:05:21 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.227 09:05:21 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:26.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.227 --rc genhtml_branch_coverage=1 00:08:26.227 --rc genhtml_function_coverage=1 00:08:26.228 --rc genhtml_legend=1 00:08:26.228 --rc geninfo_all_blocks=1 00:08:26.228 --rc geninfo_unexecuted_blocks=1 00:08:26.228 00:08:26.228 ' 00:08:26.228 09:05:21 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.228 --rc genhtml_branch_coverage=1 00:08:26.228 --rc genhtml_function_coverage=1 00:08:26.228 --rc genhtml_legend=1 00:08:26.228 --rc 
geninfo_all_blocks=1 00:08:26.228 --rc geninfo_unexecuted_blocks=1 00:08:26.228 00:08:26.228 ' 00:08:26.228 09:05:21 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.228 --rc genhtml_branch_coverage=1 00:08:26.228 --rc genhtml_function_coverage=1 00:08:26.228 --rc genhtml_legend=1 00:08:26.228 --rc geninfo_all_blocks=1 00:08:26.228 --rc geninfo_unexecuted_blocks=1 00:08:26.228 00:08:26.228 ' 00:08:26.228 09:05:21 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.228 --rc genhtml_branch_coverage=1 00:08:26.228 --rc genhtml_function_coverage=1 00:08:26.228 --rc genhtml_legend=1 00:08:26.228 --rc geninfo_all_blocks=1 00:08:26.228 --rc geninfo_unexecuted_blocks=1 00:08:26.228 00:08:26.228 ' 00:08:26.228 09:05:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:26.228 09:05:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:26.228 09:05:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:26.228 09:05:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:26.228 09:05:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.228 09:05:21 event -- common/autotest_common.sh@10 -- # set +x 00:08:26.228 ************************************ 00:08:26.228 START TEST event_perf 00:08:26.228 ************************************ 00:08:26.228 09:05:21 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:26.228 Running I/O for 1 seconds...[2024-11-20 09:05:21.184397] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:08:26.228 [2024-11-20 09:05:21.184781] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59193 ] 00:08:26.544 [2024-11-20 09:05:21.366543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.544 [2024-11-20 09:05:21.527296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.544 [2024-11-20 09:05:21.527410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.544 [2024-11-20 09:05:21.527557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.544 [2024-11-20 09:05:21.527573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.921 Running I/O for 1 seconds... 00:08:27.921 lcore 0: 189793 00:08:27.921 lcore 1: 189793 00:08:27.921 lcore 2: 189794 00:08:27.921 lcore 3: 189794 00:08:27.921 done. 
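The per-core counters above close out a single event_perf pass: the binary was invoked with a four-core mask, so four reactors spin and each reports how many events its lcore processed in the one-second window. A minimal sketch of re-running the same measurement by hand, with the path and flags exactly as traced (absolute counts vary per host); the single-core reactor and reactor_perf passes that follow use the same binary-plus-flags pattern on core mask 0x1:

# -m 0xF: reactor core mask (cores 0-3 -> the four "lcore N" lines above)
# -t 1:   run time in seconds
SPDK_DIR=/home/vagrant/spdk_repo/spdk      # workspace path from this log
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
# expected shape: one "lcore N: <event count>" line per core, then "done."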
00:08:27.921 00:08:27.921 ************************************ 00:08:27.921 END TEST event_perf 00:08:27.921 ************************************ 00:08:27.921 real 0m1.665s 00:08:27.921 user 0m4.404s 00:08:27.921 sys 0m0.134s 00:08:27.921 09:05:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.921 09:05:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:27.921 09:05:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:27.921 09:05:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:27.921 09:05:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.921 09:05:22 event -- common/autotest_common.sh@10 -- # set +x 00:08:27.921 ************************************ 00:08:27.921 START TEST event_reactor 00:08:27.921 ************************************ 00:08:27.921 09:05:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:27.921 [2024-11-20 09:05:22.894925] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:08:27.921 [2024-11-20 09:05:22.895096] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59232 ] 00:08:28.180 [2024-11-20 09:05:23.078329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.180 [2024-11-20 09:05:23.230583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.555 test_start 00:08:29.555 oneshot 00:08:29.555 tick 100 00:08:29.555 tick 100 00:08:29.555 tick 250 00:08:29.555 tick 100 00:08:29.555 tick 100 00:08:29.555 tick 100 00:08:29.555 tick 250 00:08:29.555 tick 500 00:08:29.555 tick 100 00:08:29.555 tick 100 00:08:29.555 tick 250 00:08:29.555 tick 100 00:08:29.555 tick 100 00:08:29.555 test_end 00:08:29.555 00:08:29.555 real 0m1.631s 00:08:29.555 user 0m1.412s 00:08:29.555 sys 0m0.108s 00:08:29.556 ************************************ 00:08:29.556 END TEST event_reactor 00:08:29.556 ************************************ 00:08:29.556 09:05:24 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.556 09:05:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:29.556 09:05:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:29.556 09:05:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:29.556 09:05:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.556 09:05:24 event -- common/autotest_common.sh@10 -- # set +x 00:08:29.556 ************************************ 00:08:29.556 START TEST event_reactor_perf 00:08:29.556 ************************************ 00:08:29.556 09:05:24 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:29.556 [2024-11-20 09:05:24.589135] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:08:29.556 [2024-11-20 09:05:24.589297] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ] 00:08:29.814 [2024-11-20 09:05:24.780899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.072 [2024-11-20 09:05:24.932651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.446 test_start 00:08:31.446 test_end 00:08:31.446 Performance: 256297 events per second 00:08:31.446 00:08:31.446 real 0m1.644s 00:08:31.446 user 0m1.412s 00:08:31.446 sys 0m0.120s 00:08:31.446 09:05:26 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.446 ************************************ 00:08:31.447 END TEST event_reactor_perf 00:08:31.447 ************************************ 00:08:31.447 09:05:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:31.447 09:05:26 event -- event/event.sh@49 -- # uname -s 00:08:31.447 09:05:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:31.447 09:05:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:31.447 09:05:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.447 09:05:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.447 09:05:26 event -- common/autotest_common.sh@10 -- # set +x 00:08:31.447 ************************************ 00:08:31.447 START TEST event_scheduler 00:08:31.447 ************************************ 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:31.447 * Looking for test storage... 
00:08:31.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.447 09:05:26 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.447 --rc genhtml_branch_coverage=1 00:08:31.447 --rc genhtml_function_coverage=1 00:08:31.447 --rc genhtml_legend=1 00:08:31.447 --rc geninfo_all_blocks=1 00:08:31.447 --rc geninfo_unexecuted_blocks=1 00:08:31.447 00:08:31.447 ' 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.447 --rc genhtml_branch_coverage=1 00:08:31.447 --rc genhtml_function_coverage=1 00:08:31.447 --rc genhtml_legend=1 00:08:31.447 --rc geninfo_all_blocks=1 00:08:31.447 --rc geninfo_unexecuted_blocks=1 00:08:31.447 00:08:31.447 ' 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.447 --rc genhtml_branch_coverage=1 00:08:31.447 --rc genhtml_function_coverage=1 00:08:31.447 --rc genhtml_legend=1 00:08:31.447 --rc geninfo_all_blocks=1 00:08:31.447 --rc geninfo_unexecuted_blocks=1 00:08:31.447 00:08:31.447 ' 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.447 --rc genhtml_branch_coverage=1 00:08:31.447 --rc genhtml_function_coverage=1 00:08:31.447 --rc genhtml_legend=1 00:08:31.447 --rc geninfo_all_blocks=1 00:08:31.447 --rc geninfo_unexecuted_blocks=1 00:08:31.447 00:08:31.447 ' 00:08:31.447 09:05:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:31.447 09:05:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59345 00:08:31.447 09:05:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:31.447 09:05:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:31.447 09:05:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59345 00:08:31.447 09:05:26 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59345 ']' 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.447 09:05:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:31.706 [2024-11-20 09:05:26.621864] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:08:31.706 [2024-11-20 09:05:26.622323] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59345 ] 00:08:31.706 [2024-11-20 09:05:26.816282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.965 [2024-11-20 09:05:26.990249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.965 [2024-11-20 09:05:26.990393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.965 [2024-11-20 09:05:26.990467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.965 [2024-11-20 09:05:26.990476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.531 09:05:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.531 09:05:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:32.531 09:05:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:32.531 09:05:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.531 09:05:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:32.531 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:32.531 POWER: Cannot set governor of lcore 0 to userspace 00:08:32.531 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:32.531 POWER: Cannot set governor of lcore 0 to performance 00:08:32.531 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:32.531 POWER: Cannot set governor of lcore 0 to userspace 00:08:32.531 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:32.531 POWER: Cannot set governor of lcore 0 to userspace 00:08:32.531 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:32.531 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:32.531 POWER: Unable to set Power Management Environment for lcore 0 00:08:32.531 [2024-11-20 09:05:27.612898] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:32.531 [2024-11-20 09:05:27.612927] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:32.531 [2024-11-20 09:05:27.612941] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:32.531 [2024-11-20 09:05:27.612966] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:32.531 [2024-11-20 09:05:27.612979] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:32.531 [2024-11-20 09:05:27.612992] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:32.531 09:05:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.531 09:05:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:32.531 09:05:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.531 09:05:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:33.098 [2024-11-20 09:05:28.007722] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:33.098 09:05:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.098 09:05:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:33.098 09:05:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.098 09:05:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.098 09:05:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:33.098 ************************************ 00:08:33.098 START TEST scheduler_create_thread 00:08:33.098 ************************************ 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.098 2 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.098 3 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.098 4 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.098 5 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.098 6 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.098 7 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.098 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.098 8 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.099 9 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.099 10 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.099 09:05:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.475 ************************************ 00:08:34.475 END TEST scheduler_create_thread 00:08:34.475 ************************************ 00:08:34.475 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.475 00:08:34.475 real 0m1.178s 00:08:34.475 user 0m0.010s 00:08:34.475 sys 0m0.010s 00:08:34.475 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.475 09:05:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.475 09:05:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:34.475 09:05:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59345 00:08:34.475 09:05:29 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59345 ']' 00:08:34.475 09:05:29 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59345 00:08:34.475 09:05:29 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:34.475 09:05:29 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.475 09:05:29 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59345 00:08:34.475 killing process with pid 59345 00:08:34.475 09:05:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:34.475 09:05:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:34.475 09:05:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59345' 00:08:34.475 09:05:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59345 00:08:34.475 09:05:29 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59345 00:08:34.734 [2024-11-20 09:05:29.681339] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:36.111 00:08:36.111 real 0m4.598s 00:08:36.111 user 0m8.687s 00:08:36.111 sys 0m0.593s 00:08:36.111 09:05:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.111 ************************************ 00:08:36.111 END TEST event_scheduler 00:08:36.111 ************************************ 00:08:36.111 09:05:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:36.111 09:05:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:36.111 09:05:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:36.111 09:05:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.111 09:05:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.111 09:05:30 event -- common/autotest_common.sh@10 -- # set +x 00:08:36.111 ************************************ 00:08:36.111 START TEST app_repeat 00:08:36.111 ************************************ 00:08:36.111 09:05:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59441 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:36.111 Process app_repeat pid: 59441 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59441' 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:36.111 spdk_app_start Round 0 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:36.111 09:05:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59441 /var/tmp/spdk-nbd.sock 00:08:36.111 09:05:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59441 ']' 00:08:36.111 09:05:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:36.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:36.111 09:05:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.111 09:05:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:36.111 09:05:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.111 09:05:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:36.111 [2024-11-20 09:05:30.982357] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
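(app_repeat's EAL startup continues below.) The scheduler suite that finished just above was driven entirely over RPC: set the dynamic scheduler, finish init, then create, retune, and delete threads through the scheduler_plugin methods. A condensed sketch of that sequence, assuming rpc_cmd resolves as in the harness (outside autotest, scripts/rpc.py -s <socket> would take its place; that substitution is an assumption, not shown in this log):

# The scheduler app was launched as traced: scheduler -m 0xF -p 0x2 --wait-for-rpc -f
rpc_cmd framework_set_scheduler dynamic   # survives the POWER/cpufreq errors:
                                          # dpdk governor init fails, but the
                                          # load/core/busy limits are still set
rpc_cmd framework_start_init
# One busy (-a 100) and one idle (-a 0) thread pinned to each of cores 0-3:
for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
# Unpinned threads: one at ~30% activity, one retuned to 50%, one deleted.
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"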
00:08:36.111 [2024-11-20 09:05:30.982550] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59441 ] 00:08:36.111 [2024-11-20 09:05:31.177849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.370 [2024-11-20 09:05:31.356241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.370 [2024-11-20 09:05:31.356243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.938 09:05:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.938 09:05:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:36.938 09:05:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.507 Malloc0 00:08:37.507 09:05:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.766 Malloc1 00:08:37.766 09:05:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.766 09:05:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:38.025 /dev/nbd0 00:08:38.284 09:05:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:38.284 09:05:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:38.284 09:05:33 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:38.284 1+0 records in 00:08:38.284 1+0 records out 00:08:38.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034726 s, 11.8 MB/s 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:38.284 09:05:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:38.284 09:05:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.284 09:05:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:38.284 09:05:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:38.543 /dev/nbd1 00:08:38.543 09:05:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:38.543 09:05:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:38.543 1+0 records in 00:08:38.543 1+0 records out 00:08:38.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440893 s, 9.3 MB/s 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:38.543 09:05:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:38.543 09:05:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.543 09:05:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:38.543 09:05:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:38.543 09:05:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.543 
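The waitfornbd helper traced above is a poll-then-probe pattern: loop until the kernel lists the device in /proc/partitions, then confirm it is actually readable with a single direct-I/O read. A minimal sketch of the same idea, assuming bash plus coreutils; the retry delay and the simplified probe target are assumptions, since the helper's full body is not visible in this trace:

    # Sketch: wait for an NBD device (name without /dev/, e.g. "nbd0") to become usable.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed back-off between polls; not visible in the trace
        done
        # A direct-I/O read bypasses the page cache, so a dead device cannot fake success.
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }

The trace above additionally stats the copied block and fails on a zero size, which is why each attach is followed by a 1+0-records dd and a stat of the nbdtest file.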
09:05:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:38.801 { 00:08:38.801 "nbd_device": "/dev/nbd0", 00:08:38.801 "bdev_name": "Malloc0" 00:08:38.801 }, 00:08:38.801 { 00:08:38.801 "nbd_device": "/dev/nbd1", 00:08:38.801 "bdev_name": "Malloc1" 00:08:38.801 } 00:08:38.801 ]' 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:38.801 { 00:08:38.801 "nbd_device": "/dev/nbd0", 00:08:38.801 "bdev_name": "Malloc0" 00:08:38.801 }, 00:08:38.801 { 00:08:38.801 "nbd_device": "/dev/nbd1", 00:08:38.801 "bdev_name": "Malloc1" 00:08:38.801 } 00:08:38.801 ]' 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:38.801 /dev/nbd1' 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:38.801 /dev/nbd1' 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:38.801 256+0 records in 00:08:38.801 256+0 records out 00:08:38.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106013 s, 98.9 MB/s 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.801 09:05:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:39.107 256+0 records in 00:08:39.107 256+0 records out 00:08:39.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028929 s, 36.2 MB/s 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:39.107 256+0 records in 00:08:39.107 256+0 records out 00:08:39.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357535 s, 29.3 MB/s 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:39.107 09:05:33 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:39.107 09:05:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:39.108 09:05:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.108 09:05:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:39.108 09:05:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:39.108 09:05:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:39.108 09:05:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.108 09:05:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:39.366 09:05:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:39.366 09:05:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:39.366 09:05:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:39.366 09:05:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.366 09:05:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.366 09:05:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:39.367 09:05:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:39.367 09:05:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.367 09:05:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.367 09:05:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.625 09:05:34 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.625 09:05:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:39.883 09:05:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:39.883 09:05:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:39.883 09:05:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:40.141 09:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:40.141 09:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:40.141 09:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:40.141 09:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:40.141 09:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:40.141 09:05:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:40.141 09:05:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:40.141 09:05:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:40.141 09:05:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:40.141 09:05:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:40.400 09:05:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:41.778 [2024-11-20 09:05:36.531211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.778 [2024-11-20 09:05:36.648802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.779 [2024-11-20 09:05:36.648810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.779 [2024-11-20 09:05:36.827190] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:41.779 [2024-11-20 09:05:36.827351] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:43.732 09:05:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:43.732 09:05:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:43.732 spdk_app_start Round 1 00:08:43.732 09:05:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59441 /var/tmp/spdk-nbd.sock 00:08:43.732 09:05:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59441 ']' 00:08:43.732 09:05:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:43.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:43.732 09:05:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.732 09:05:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
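Each round above ends the same way: spdk_kill_instance SIGTERM goes over the RPC socket, the app_repeat binary catches it, stops the current spdk_app_start iteration, and re-enters the next round under the same pid (59441 throughout). A condensed sketch of the harness side of that cycle; waitforlisten comes from autotest_common.sh, and folding the steps into one loop like this is a reconstruction, not the suite's literal code:

    # Sketch: drive one kill/restart cycle per round against the same target pid.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock  # wait for the RPC socket to come back
        # ... malloc/NBD data-verify pass for this round ...
        rpc spdk_kill_instance SIGTERM  # graceful stop of the current iteration
        sleep 3                         # matches the 'sleep 3' in the trace
    done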
00:08:43.732 09:05:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.732 09:05:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:43.732 09:05:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.732 09:05:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:43.732 09:05:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:44.299 Malloc0 00:08:44.299 09:05:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:44.558 Malloc1 00:08:44.558 09:05:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.558 09:05:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:44.816 /dev/nbd0 00:08:44.816 09:05:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:44.816 09:05:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:44.816 1+0 records in 00:08:44.816 1+0 records out 
00:08:44.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246371 s, 16.6 MB/s 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:44.816 09:05:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:44.816 09:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:44.816 09:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:44.817 09:05:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:45.074 /dev/nbd1 00:08:45.074 09:05:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:45.074 09:05:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:45.074 09:05:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:45.074 09:05:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:45.074 09:05:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:45.074 09:05:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:45.075 1+0 records in 00:08:45.075 1+0 records out 00:08:45.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038594 s, 10.6 MB/s 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:45.075 09:05:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:45.075 09:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:45.075 09:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:45.075 09:05:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:45.075 09:05:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.075 09:05:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.333 09:05:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:45.333 { 00:08:45.333 "nbd_device": "/dev/nbd0", 00:08:45.333 "bdev_name": "Malloc0" 00:08:45.333 }, 00:08:45.333 { 00:08:45.333 "nbd_device": "/dev/nbd1", 00:08:45.333 "bdev_name": "Malloc1" 00:08:45.333 } 
00:08:45.333 ]' 00:08:45.333 09:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.333 09:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:45.333 { 00:08:45.333 "nbd_device": "/dev/nbd0", 00:08:45.333 "bdev_name": "Malloc0" 00:08:45.333 }, 00:08:45.333 { 00:08:45.333 "nbd_device": "/dev/nbd1", 00:08:45.333 "bdev_name": "Malloc1" 00:08:45.333 } 00:08:45.333 ]' 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:45.591 /dev/nbd1' 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:45.591 /dev/nbd1' 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:45.591 256+0 records in 00:08:45.591 256+0 records out 00:08:45.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00716892 s, 146 MB/s 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:45.591 256+0 records in 00:08:45.591 256+0 records out 00:08:45.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261207 s, 40.1 MB/s 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:45.591 256+0 records in 00:08:45.591 256+0 records out 00:08:45.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300171 s, 34.9 MB/s 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:45.591 09:05:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.591 09:05:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:45.850 09:05:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.108 09:05:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:46.367 09:05:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:46.367 09:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:46.367 09:05:41 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:46.626 09:05:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:46.626 09:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:46.626 09:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:46.626 09:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:46.626 09:05:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:46.626 09:05:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:46.626 09:05:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:46.626 09:05:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:46.626 09:05:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:46.626 09:05:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:47.193 09:05:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:48.129 [2024-11-20 09:05:43.131018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:48.387 [2024-11-20 09:05:43.262711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.387 [2024-11-20 09:05:43.262737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.387 [2024-11-20 09:05:43.457658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:48.387 [2024-11-20 09:05:43.457799] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:50.353 spdk_app_start Round 2 00:08:50.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:50.353 09:05:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:50.353 09:05:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:50.353 09:05:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59441 /var/tmp/spdk-nbd.sock 00:08:50.353 09:05:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59441 ']' 00:08:50.353 09:05:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:50.353 09:05:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.353 09:05:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
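Before its I/O pass, every round rebuilds an identical fixture: two 64 MB malloc bdevs with 4 KiB blocks, each exported through the kernel NBD driver. Collected from the trace into one place (socket path, sizes, and resulting bdev names are verbatim from this log):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096        # 64 MB bdev, 4096-byte blocks -> Malloc0
    rpc bdev_malloc_create 64 4096        # second identical bdev        -> Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0  # expose each bdev as a kernel block device
    rpc nbd_start_disk Malloc1 /dev/nbd1

Malloc bdevs live in the target's memory, which is what lets the same /dev/nbd0 and /dev/nbd1 be torn down and recreated cheaply on every round.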
00:08:50.353 09:05:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.353 09:05:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:50.353 09:05:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.353 09:05:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:50.353 09:05:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:50.611 Malloc0 00:08:50.611 09:05:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:51.177 Malloc1 00:08:51.177 09:05:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:51.177 /dev/nbd0 00:08:51.177 09:05:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:51.436 09:05:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:51.436 1+0 records in 00:08:51.436 1+0 records out 
00:08:51.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023121 s, 17.7 MB/s 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.436 09:05:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:51.436 09:05:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.436 09:05:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.436 09:05:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:51.696 /dev/nbd1 00:08:51.696 09:05:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:51.696 09:05:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:51.696 1+0 records in 00:08:51.696 1+0 records out 00:08:51.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00225674 s, 1.8 MB/s 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.696 09:05:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:51.696 09:05:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:51.696 09:05:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:51.696 09:05:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:51.696 09:05:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.696 09:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:51.955 { 00:08:51.955 "nbd_device": "/dev/nbd0", 00:08:51.955 "bdev_name": "Malloc0" 00:08:51.955 }, 00:08:51.955 { 00:08:51.955 "nbd_device": "/dev/nbd1", 00:08:51.955 "bdev_name": "Malloc1" 00:08:51.955 } 00:08:51.955 
]' 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:51.955 { 00:08:51.955 "nbd_device": "/dev/nbd0", 00:08:51.955 "bdev_name": "Malloc0" 00:08:51.955 }, 00:08:51.955 { 00:08:51.955 "nbd_device": "/dev/nbd1", 00:08:51.955 "bdev_name": "Malloc1" 00:08:51.955 } 00:08:51.955 ]' 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:51.955 /dev/nbd1' 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:51.955 /dev/nbd1' 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:51.955 256+0 records in 00:08:51.955 256+0 records out 00:08:51.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00911295 s, 115 MB/s 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.955 09:05:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:51.955 256+0 records in 00:08:51.955 256+0 records out 00:08:51.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320069 s, 32.8 MB/s 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:51.955 256+0 records in 00:08:51.955 256+0 records out 00:08:51.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034462 s, 30.4 MB/s 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.955 09:05:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:52.522 09:05:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:52.522 09:05:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:52.522 09:05:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:52.522 09:05:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.522 09:05:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.523 09:05:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:52.523 09:05:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:52.523 09:05:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.523 09:05:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.523 09:05:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.781 09:05:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.041 09:05:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:53.041 09:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:53.041 09:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:08:53.041 09:05:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:53.041 09:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:53.041 09:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.041 09:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:53.041 09:05:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:53.041 09:05:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:53.041 09:05:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:53.041 09:05:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:53.041 09:05:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:53.041 09:05:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:53.609 09:05:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:54.987 [2024-11-20 09:05:49.749431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.987 [2024-11-20 09:05:49.889346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.987 [2024-11-20 09:05:49.889357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.987 [2024-11-20 09:05:50.103584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:54.987 [2024-11-20 09:05:50.104047] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:56.893 09:05:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59441 /var/tmp/spdk-nbd.sock 00:08:56.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59441 ']' 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
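nbd_get_count, shown above returning 0 once both devices are stopped, is a jq/grep pipeline over the nbd_get_disks RPC; the bare true in the trace is the || fallback that keeps grep -c's non-zero exit from aborting the run when the list is empty. A sketch of the check, where the wrapper name nbd_leak_check is a hypothetical label, not a helper from the suite:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    nbd_leak_check() {
        local names count
        names=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)  # grep -c still prints 0 on no match
        [ "$count" -eq 0 ] || { echo "NBD devices still exported: $names" >&2; return 1; }
    }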
00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:56.893 09:05:51 event.app_repeat -- event/event.sh@39 -- # killprocess 59441 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59441 ']' 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59441 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59441 00:08:56.893 killing process with pid 59441 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59441' 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59441 00:08:56.893 09:05:51 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59441 00:08:58.276 spdk_app_start is called in Round 0. 00:08:58.276 Shutdown signal received, stop current app iteration 00:08:58.276 Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 reinitialization... 00:08:58.276 spdk_app_start is called in Round 1. 00:08:58.276 Shutdown signal received, stop current app iteration 00:08:58.276 Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 reinitialization... 00:08:58.276 spdk_app_start is called in Round 2. 00:08:58.276 Shutdown signal received, stop current app iteration 00:08:58.276 Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 reinitialization... 00:08:58.276 spdk_app_start is called in Round 3. 00:08:58.276 Shutdown signal received, stop current app iteration 00:08:58.276 09:05:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:58.276 09:05:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:58.276 00:08:58.276 real 0m22.094s 00:08:58.276 user 0m48.621s 00:08:58.276 sys 0m3.312s 00:08:58.276 09:05:53 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.276 ************************************ 00:08:58.276 END TEST app_repeat 00:08:58.276 ************************************ 00:08:58.276 09:05:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:58.276 09:05:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:58.276 09:05:53 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:58.276 09:05:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.276 09:05:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.276 09:05:53 event -- common/autotest_common.sh@10 -- # set +x 00:08:58.276 ************************************ 00:08:58.276 START TEST cpu_locks 00:08:58.276 ************************************ 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:58.276 * Looking for test storage... 
00:08:58.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.276 09:05:53 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.276 --rc genhtml_branch_coverage=1 00:08:58.276 --rc genhtml_function_coverage=1 00:08:58.276 --rc genhtml_legend=1 00:08:58.276 --rc geninfo_all_blocks=1 00:08:58.276 --rc geninfo_unexecuted_blocks=1 00:08:58.276 00:08:58.276 ' 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.276 --rc genhtml_branch_coverage=1 00:08:58.276 --rc genhtml_function_coverage=1 
00:08:58.276 --rc genhtml_legend=1 00:08:58.276 --rc geninfo_all_blocks=1 00:08:58.276 --rc geninfo_unexecuted_blocks=1 00:08:58.276 00:08:58.276 ' 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.276 --rc genhtml_branch_coverage=1 00:08:58.276 --rc genhtml_function_coverage=1 00:08:58.276 --rc genhtml_legend=1 00:08:58.276 --rc geninfo_all_blocks=1 00:08:58.276 --rc geninfo_unexecuted_blocks=1 00:08:58.276 00:08:58.276 ' 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.276 --rc genhtml_branch_coverage=1 00:08:58.276 --rc genhtml_function_coverage=1 00:08:58.276 --rc genhtml_legend=1 00:08:58.276 --rc geninfo_all_blocks=1 00:08:58.276 --rc geninfo_unexecuted_blocks=1 00:08:58.276 00:08:58.276 ' 00:08:58.276 09:05:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:58.276 09:05:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:58.276 09:05:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:58.276 09:05:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.276 09:05:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:58.276 ************************************ 00:08:58.276 START TEST default_locks 00:08:58.276 ************************************ 00:08:58.276 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:58.276 09:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59924 00:08:58.276 09:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59924 00:08:58.276 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59924 ']' 00:08:58.276 09:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:58.276 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.277 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.277 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.277 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.277 09:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 [2024-11-20 09:05:53.401380] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
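The lcov version gate traced above comes from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared field by field. A minimal bash sketch of that compare, keeping the lt name and the IFS=.-: split from the trace; the real cmp_versions also dispatches on an op argument for the other comparison operators, which this sketch omits.

    lt() {
        local -a ver1 ver2
        local i
        # Split on the same separators the trace shows (IFS=.-:).
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
            if (( ${ver1[i]:-0} < ${ver2[i]:-0} )); then
                return 0
            elif (( ${ver1[i]:-0} > ${ver2[i]:-0} )); then
                return 1
            fi
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # true here, so the 1.x LCOV_OPTS get exported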
00:08:58.535 [2024-11-20 09:05:53.402205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59924 ] 00:08:58.535 [2024-11-20 09:05:53.593582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.794 [2024-11-20 09:05:53.745650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.730 09:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.730 09:05:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:59.730 09:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59924 00:08:59.730 09:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59924 00:08:59.730 09:05:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59924 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59924 ']' 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59924 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59924 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.299 killing process with pid 59924 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59924' 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59924 00:09:00.299 09:05:55 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59924 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59924 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59924 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59924 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59924 ']' 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.832 09:05:57 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:02.832 ERROR: process (pid: 59924) is no longer running 00:09:02.832 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59924) - No such process 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:02.832 00:09:02.832 real 0m4.399s 00:09:02.832 user 0m4.350s 00:09:02.832 sys 0m0.792s 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.832 ************************************ 00:09:02.832 END TEST default_locks 00:09:02.832 ************************************ 00:09:02.832 09:05:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:02.832 09:05:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:02.832 09:05:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.832 09:05:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.832 09:05:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:02.832 ************************************ 00:09:02.832 START TEST default_locks_via_rpc 00:09:02.832 ************************************ 00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59999 00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59999 00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59999 ']' 00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
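The default_locks test above checks that spdk_tgt really holds its core lock before and after killing it. The probe is exactly the pipeline in the trace: lslocks lists the locks a pid holds, and grep looks for the spdk_cpu_lock files (named /var/tmp/spdk_cpu_lock_*, as the overlapped test later shows). A self-contained sketch of that helper:

    locks_exist() {
        local pid=$1
        # spdk_tgt takes a flock() on /var/tmp/spdk_cpu_lock_NNN for each
        # core it claims; grep -q only reports whether such a lock shows up.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 59924 && echo "core lock held by pid 59924"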
00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.832 09:05:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.832 [2024-11-20 09:05:57.884515] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:09:02.832 [2024-11-20 09:05:57.884739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59999 ] 00:09:03.091 [2024-11-20 09:05:58.075217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.349 [2024-11-20 09:05:58.223968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59999 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59999 00:09:04.285 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:04.547 09:05:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59999 00:09:04.547 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59999 ']' 00:09:04.547 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59999 00:09:04.547 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:04.809 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.809 09:05:59 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59999 00:09:04.809 killing process with pid 59999 00:09:04.809 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.809 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.809 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59999' 00:09:04.809 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59999 00:09:04.809 09:05:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59999 00:09:07.342 ************************************ 00:09:07.342 END TEST default_locks_via_rpc 00:09:07.342 ************************************ 00:09:07.342 00:09:07.342 real 0m4.470s 00:09:07.342 user 0m4.459s 00:09:07.342 sys 0m0.804s 00:09:07.342 09:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.342 09:06:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.342 09:06:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:07.342 09:06:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.342 09:06:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.342 09:06:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:07.342 ************************************ 00:09:07.342 START TEST non_locking_app_on_locked_coremask 00:09:07.342 ************************************ 00:09:07.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60082 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60082 /var/tmp/spdk.sock 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60082 ']' 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.342 09:06:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:07.342 [2024-11-20 09:06:02.374997] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
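The default_locks_via_rpc run above exercises the same locks, but toggled at runtime over JSON-RPC rather than by process lifetime: rpc_cmd is autotest's wrapper around scripts/rpc.py talking to the target's UNIX socket. A hedged equivalent of the two calls in the trace, assuming the default socket path from the log:

    # Drop the core locks while the target keeps running...
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # ...then re-claim them; lslocks on the pid flips accordingly.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks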
00:09:07.342 [2024-11-20 09:06:02.375215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60082 ] 00:09:07.601 [2024-11-20 09:06:02.559055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.859 [2024-11-20 09:06:02.724862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60103 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60103 /var/tmp/spdk2.sock 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60103 ']' 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.795 09:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:08.795 [2024-11-20 09:06:03.799059] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:09:08.795 [2024-11-20 09:06:03.799549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60103 ] 00:09:09.054 [2024-11-20 09:06:04.012290] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
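non_locking_app_on_locked_coremask, starting above, runs two targets on the same core 0: the first claims the core lock, and the second can only come up because it opts out of claiming and moves its RPC socket aside. Both command lines are verbatim in the trace; the backgrounding here is sketch shorthand:

    build/bin/spdk_tgt -m 0x1 &    # pid 60082: claims core 0, listens on /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 60103: no claim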
00:09:09.054 [2024-11-20 09:06:04.012357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.313 [2024-11-20 09:06:04.298871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.844 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.844 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:11.844 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60082 00:09:11.844 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60082 00:09:11.844 09:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:12.413 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60082 00:09:12.413 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60082 ']' 00:09:12.413 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60082 00:09:12.413 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:12.413 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.413 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60082 00:09:12.673 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.673 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.673 killing process with pid 60082 00:09:12.673 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60082' 00:09:12.673 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60082 00:09:12.673 09:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60082 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60103 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60103 ']' 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60103 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60103 00:09:17.948 killing process with pid 60103 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60103' 00:09:17.948 09:06:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60103 00:09:17.948 09:06:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60103 00:09:19.326 ************************************ 00:09:19.326 END TEST non_locking_app_on_locked_coremask 00:09:19.326 ************************************ 00:09:19.326 00:09:19.326 real 0m12.010s 00:09:19.326 user 0m12.361s 00:09:19.326 sys 0m1.732s 00:09:19.326 09:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.326 09:06:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:19.326 09:06:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:19.326 09:06:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.326 09:06:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.326 09:06:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:19.326 ************************************ 00:09:19.326 START TEST locking_app_on_unlocked_coremask 00:09:19.326 ************************************ 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:09:19.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60257 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60257 /var/tmp/spdk.sock 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60257 ']' 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.326 09:06:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:19.586 [2024-11-20 09:06:14.451595] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:09:19.586 [2024-11-20 09:06:14.452076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60257 ] 00:09:19.586 [2024-11-20 09:06:14.636232] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:19.586 [2024-11-20 09:06:14.638385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.845 [2024-11-20 09:06:14.778445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60273 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60273 /var/tmp/spdk2.sock 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60273 ']' 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.781 09:06:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:20.781 [2024-11-20 09:06:15.802307] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:09:20.781 [2024-11-20 09:06:15.802812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60273 ] 00:09:21.042 [2024-11-20 09:06:16.006007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.299 [2024-11-20 09:06:16.292367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.888 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.888 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:23.888 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60273 00:09:23.888 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60273 00:09:23.888 09:06:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:24.454 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60257 00:09:24.454 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60257 ']' 00:09:24.454 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60257 00:09:24.454 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:24.454 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.454 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60257 00:09:24.712 killing process with pid 60257 00:09:24.712 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.712 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.712 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60257' 00:09:24.712 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60257 00:09:24.712 09:06:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60257 00:09:28.902 09:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60273 00:09:28.902 09:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60273 ']' 00:09:28.902 09:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60273 00:09:28.902 09:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:28.902 09:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.902 09:06:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60273 00:09:28.902 killing process with pid 60273 00:09:28.902 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.902 09:06:24 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.902 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60273' 00:09:28.902 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60273 00:09:28.902 09:06:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60273 00:09:31.459 00:09:31.459 real 0m11.881s 00:09:31.459 user 0m12.341s 00:09:31.459 sys 0m1.785s 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.459 ************************************ 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:31.459 END TEST locking_app_on_unlocked_coremask 00:09:31.459 ************************************ 00:09:31.459 09:06:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:31.459 09:06:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.459 09:06:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.459 09:06:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:31.459 ************************************ 00:09:31.459 START TEST locking_app_on_locked_coremask 00:09:31.459 ************************************ 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60421 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60421 /var/tmp/spdk.sock 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60421 ']' 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.459 09:06:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:31.459 [2024-11-20 09:06:26.397187] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:09:31.459 [2024-11-20 09:06:26.397622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60421 ] 00:09:31.717 [2024-11-20 09:06:26.588641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.717 [2024-11-20 09:06:26.735028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60443 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60443 /var/tmp/spdk2.sock 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60443 /var/tmp/spdk2.sock 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60443 /var/tmp/spdk2.sock 00:09:32.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60443 ']' 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.653 09:06:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:32.653 [2024-11-20 09:06:27.745605] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
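locking_app_on_locked_coremask expects the second, lock-claiming instance to fail startup, so waitforlisten runs under autotest_common.sh's NOT, which passes only when the wrapped command fails. A simplified sketch of that idiom; as the es=1 and (( es > 128 )) trace shows, the real helper additionally distinguishes signal deaths from ordinary failures, which this version does not:

    NOT() {
        # Invert the exit status of a command that is expected to fail.
        if "$@"; then
            return 1   # unexpected success
        fi
        return 0       # failed, as expected
    }
    NOT waitforlisten 60443 /var/tmp/spdk2.sock && echo "second core claim refused"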
00:09:32.653 [2024-11-20 09:06:27.745827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60443 ] 00:09:32.913 [2024-11-20 09:06:27.952156] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60421 has claimed it. 00:09:32.913 [2024-11-20 09:06:27.952243] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:33.481 ERROR: process (pid: 60443) is no longer running 00:09:33.481 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60443) - No such process 00:09:33.481 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.481 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:33.481 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:33.481 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.481 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.481 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.481 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60421 00:09:33.481 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60421 00:09:33.481 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60421 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60421 ']' 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60421 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60421 00:09:34.050 killing process with pid 60421 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60421' 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60421 00:09:34.050 09:06:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60421 00:09:36.584 00:09:36.584 real 0m5.035s 00:09:36.584 user 0m5.249s 00:09:36.584 sys 0m1.023s 00:09:36.584 09:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.584 09:06:31 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:09:36.584 ************************************ 00:09:36.584 END TEST locking_app_on_locked_coremask 00:09:36.584 ************************************ 00:09:36.584 09:06:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:36.584 09:06:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.584 09:06:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.584 09:06:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:36.584 ************************************ 00:09:36.584 START TEST locking_overlapped_coremask 00:09:36.584 ************************************ 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60512 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60512 /var/tmp/spdk.sock 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60512 ']' 00:09:36.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.584 09:06:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:36.584 [2024-11-20 09:06:31.486963] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:09:36.584 [2024-11-20 09:06:31.487203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60512 ] 00:09:36.584 [2024-11-20 09:06:31.685252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:36.852 [2024-11-20 09:06:31.871989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.852 [2024-11-20 09:06:31.874110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.852 [2024-11-20 09:06:31.874148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60536 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60536 /var/tmp/spdk2.sock 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60536 /var/tmp/spdk2.sock 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60536 /var/tmp/spdk2.sock 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60536 ']' 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:37.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.802 09:06:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:38.062 [2024-11-20 09:06:32.984633] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
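locking_overlapped_coremask pits -m 0x7 against -m 0x1c deliberately: the two masks share exactly one core, which is why the second target aborts on core 2 just below. The overlap is plain mask arithmetic:

    #  0x7  = 0b00111  -> cores 0,1,2
    #  0x1c = 0b11100  -> cores 2,3,4
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2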
00:09:38.062 [2024-11-20 09:06:32.984904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60536 ] 00:09:38.321 [2024-11-20 09:06:33.188339] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60512 has claimed it. 00:09:38.321 [2024-11-20 09:06:33.188499] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:38.581 ERROR: process (pid: 60536) is no longer running 00:09:38.581 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60536) - No such process 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60512 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60512 ']' 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60512 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.581 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60512 00:09:38.840 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.840 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.840 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60512' 00:09:38.840 killing process with pid 60512 00:09:38.840 09:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60512 00:09:38.840 09:06:33 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60512 00:09:41.373 00:09:41.373 real 0m4.817s 00:09:41.373 user 0m12.726s 00:09:41.373 sys 0m0.898s 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:41.373 ************************************ 00:09:41.373 END TEST locking_overlapped_coremask 00:09:41.373 ************************************ 00:09:41.373 09:06:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:41.373 09:06:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.373 09:06:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.373 09:06:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:41.373 ************************************ 00:09:41.373 START TEST locking_overlapped_coremask_via_rpc 00:09:41.373 ************************************ 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:41.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60605 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60605 /var/tmp/spdk.sock 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60605 ']' 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.373 09:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.373 [2024-11-20 09:06:36.361921] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:09:41.373 [2024-11-20 09:06:36.362130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60605 ] 00:09:41.632 [2024-11-20 09:06:36.557197] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:41.632 [2024-11-20 09:06:36.557264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:41.632 [2024-11-20 09:06:36.719255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.632 [2024-11-20 09:06:36.719396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.632 [2024-11-20 09:06:36.719417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.569 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.569 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:42.569 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60623 00:09:42.569 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60623 /var/tmp/spdk2.sock 00:09:42.569 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:42.569 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60623 ']' 00:09:42.569 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:42.828 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.828 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:42.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:42.828 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.828 09:06:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.828 [2024-11-20 09:06:37.832090] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:09:42.828 [2024-11-20 09:06:37.832520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:09:43.087 [2024-11-20 09:06:38.038870] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:43.087 [2024-11-20 09:06:38.038961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.346 [2024-11-20 09:06:38.366399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.346 [2024-11-20 09:06:38.369855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.346 [2024-11-20 09:06:38.369880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.945 [2024-11-20 09:06:40.582991] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60605 has claimed it. 
00:09:45.945 request: 00:09:45.945 { 00:09:45.945 "method": "framework_enable_cpumask_locks", 00:09:45.945 "req_id": 1 00:09:45.945 } 00:09:45.945 Got JSON-RPC error response 00:09:45.945 response: 00:09:45.945 { 00:09:45.945 "code": -32603, 00:09:45.945 "message": "Failed to claim CPU core: 2" 00:09:45.945 } 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60605 /var/tmp/spdk.sock 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60605 ']' 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60623 /var/tmp/spdk2.sock 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60623 ']' 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:45.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
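Note: both targets were started with --disable-cpumask-locks, so neither claims its cores at boot; the test then turns locking on at runtime. framework_enable_cpumask_locks succeeds on the first target, but the second must claim an exclusive per-core lock file for each core in its mask (named /var/tmp/spdk_cpu_lock_NNN, as check_remaining_locks enumerates below), and core 2 is already held by pid 60605. The RPC therefore fails with -32603 "Failed to claim CPU core: 2", which is exactly the outcome the NOT wrapper asserts. A minimal sketch of such a per-core claim, assuming flock(1) semantics rather than SPDK's actual in-process locking code:

    # Hypothetical per-core claim via flock(1). Fails fast if another
    # process already holds the lock file for that core.
    claim_core() {
        local core=$1 fd lockfile
        lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
        exec {fd}>"$lockfile"
        flock -n "$fd" || { echo "core $core already claimed" >&2; return 1; }
    }
    claim_core 2   # fails while another holder keeps the lock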
00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.945 09:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.203 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.203 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:46.203 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:46.203 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:46.203 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:46.203 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:46.203 00:09:46.203 real 0m4.949s 00:09:46.203 user 0m1.808s 00:09:46.203 sys 0m0.234s 00:09:46.203 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.203 ************************************ 00:09:46.203 END TEST locking_overlapped_coremask_via_rpc 00:09:46.203 ************************************ 00:09:46.203 09:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.203 09:06:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:46.203 09:06:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60605 ]] 00:09:46.203 09:06:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60605 00:09:46.203 09:06:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60605 ']' 00:09:46.203 09:06:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60605 00:09:46.203 09:06:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:46.203 09:06:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.203 09:06:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60605 00:09:46.204 killing process with pid 60605 00:09:46.204 09:06:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.204 09:06:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.204 09:06:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60605' 00:09:46.204 09:06:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60605 00:09:46.204 09:06:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60605 00:09:48.737 09:06:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60623 ]] 00:09:48.737 09:06:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60623 00:09:48.737 09:06:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60623 ']' 00:09:48.737 09:06:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60623 00:09:48.737 09:06:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:48.737 09:06:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.737 
09:06:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60623 00:09:48.737 killing process with pid 60623 00:09:48.737 09:06:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:48.737 09:06:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:48.737 09:06:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60623' 00:09:48.737 09:06:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60623 00:09:48.737 09:06:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60623 00:09:51.272 09:06:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:51.272 Process with pid 60605 is not found 00:09:51.272 09:06:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:51.272 09:06:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60605 ]] 00:09:51.272 09:06:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60605 00:09:51.272 09:06:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60605 ']' 00:09:51.272 09:06:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60605 00:09:51.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60605) - No such process 00:09:51.272 09:06:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60605 is not found' 00:09:51.272 09:06:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60623 ]] 00:09:51.272 09:06:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60623 00:09:51.272 09:06:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60623 ']' 00:09:51.272 Process with pid 60623 is not found 00:09:51.272 09:06:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60623 00:09:51.272 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60623) - No such process 00:09:51.272 09:06:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60623 is not found' 00:09:51.272 09:06:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:51.272 00:09:51.272 real 0m53.043s 00:09:51.272 user 1m30.403s 00:09:51.272 sys 0m8.855s 00:09:51.272 ************************************ 00:09:51.272 END TEST cpu_locks 00:09:51.272 ************************************ 00:09:51.272 09:06:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.272 09:06:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:51.272 ************************************ 00:09:51.272 END TEST event 00:09:51.272 ************************************ 00:09:51.272 00:09:51.272 real 1m25.214s 00:09:51.272 user 2m35.144s 00:09:51.272 sys 0m13.427s 00:09:51.272 09:06:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.272 09:06:46 event -- common/autotest_common.sh@10 -- # set +x 00:09:51.273 09:06:46 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:51.273 09:06:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.273 09:06:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.273 09:06:46 -- common/autotest_common.sh@10 -- # set +x 00:09:51.273 ************************************ 00:09:51.273 START TEST thread 00:09:51.273 ************************************ 00:09:51.273 09:06:46 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:51.273 * Looking for test storage... 
00:09:51.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:51.273 09:06:46 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.273 09:06:46 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.273 09:06:46 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.273 09:06:46 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.273 09:06:46 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.273 09:06:46 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.273 09:06:46 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.273 09:06:46 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.273 09:06:46 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.273 09:06:46 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.273 09:06:46 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.273 09:06:46 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.273 09:06:46 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.273 09:06:46 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.273 09:06:46 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.273 09:06:46 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:51.273 09:06:46 thread -- scripts/common.sh@345 -- # : 1 00:09:51.273 09:06:46 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.273 09:06:46 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:51.273 09:06:46 thread -- scripts/common.sh@365 -- # decimal 1 00:09:51.273 09:06:46 thread -- scripts/common.sh@353 -- # local d=1 00:09:51.273 09:06:46 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.273 09:06:46 thread -- scripts/common.sh@355 -- # echo 1 00:09:51.273 09:06:46 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.273 09:06:46 thread -- scripts/common.sh@366 -- # decimal 2 00:09:51.273 09:06:46 thread -- scripts/common.sh@353 -- # local d=2 00:09:51.532 09:06:46 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.532 09:06:46 thread -- scripts/common.sh@355 -- # echo 2 00:09:51.532 09:06:46 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.532 09:06:46 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.532 09:06:46 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.532 09:06:46 thread -- scripts/common.sh@368 -- # return 0 00:09:51.532 09:06:46 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.532 09:06:46 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.532 --rc genhtml_branch_coverage=1 00:09:51.532 --rc genhtml_function_coverage=1 00:09:51.532 --rc genhtml_legend=1 00:09:51.532 --rc geninfo_all_blocks=1 00:09:51.532 --rc geninfo_unexecuted_blocks=1 00:09:51.532 00:09:51.532 ' 00:09:51.532 09:06:46 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.532 --rc genhtml_branch_coverage=1 00:09:51.532 --rc genhtml_function_coverage=1 00:09:51.532 --rc genhtml_legend=1 00:09:51.532 --rc geninfo_all_blocks=1 00:09:51.532 --rc geninfo_unexecuted_blocks=1 00:09:51.532 00:09:51.532 ' 00:09:51.532 09:06:46 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:51.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:51.532 --rc genhtml_branch_coverage=1 00:09:51.532 --rc genhtml_function_coverage=1 00:09:51.532 --rc genhtml_legend=1 00:09:51.532 --rc geninfo_all_blocks=1 00:09:51.532 --rc geninfo_unexecuted_blocks=1 00:09:51.532 00:09:51.532 ' 00:09:51.532 09:06:46 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.532 --rc genhtml_branch_coverage=1 00:09:51.532 --rc genhtml_function_coverage=1 00:09:51.532 --rc genhtml_legend=1 00:09:51.532 --rc geninfo_all_blocks=1 00:09:51.532 --rc geninfo_unexecuted_blocks=1 00:09:51.532 00:09:51.532 ' 00:09:51.532 09:06:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:51.532 09:06:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:51.532 09:06:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.532 09:06:46 thread -- common/autotest_common.sh@10 -- # set +x 00:09:51.532 ************************************ 00:09:51.532 START TEST thread_poller_perf 00:09:51.532 ************************************ 00:09:51.532 09:06:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:51.532 [2024-11-20 09:06:46.454716] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:09:51.532 [2024-11-20 09:06:46.455068] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60824 ] 00:09:51.791 [2024-11-20 09:06:46.650388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.791 [2024-11-20 09:06:46.822235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.791 Running 1000 pollers for 1 seconds with 1 microseconds period. 
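Note: this poller_perf run registers 1000 pollers (-b 1000) with a 1-microsecond period (-l 1) and drives them for one second (-t 1). In the result block that follows, poller_cost is consistent with busy cycles divided by total_run_count, converted to nanoseconds via the reported tsc_hz. Reproducing the arithmetic from the counters below:

    # poller_cost from the reported counters (2.2 GHz TSC)
    echo $(( 2218925052 / 263000 ))                         # 8436 cyc per run
    echo $(( 2218925052 / 263000 * 10**9 / 2200000000 ))    # 3834 nsec per run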
00:09:53.216 [2024-11-20T09:06:48.336Z] ====================================== 00:09:53.216 [2024-11-20T09:06:48.336Z] busy:2218925052 (cyc) 00:09:53.216 [2024-11-20T09:06:48.336Z] total_run_count: 263000 00:09:53.216 [2024-11-20T09:06:48.336Z] tsc_hz: 2200000000 (cyc) 00:09:53.216 [2024-11-20T09:06:48.336Z] ====================================== 00:09:53.216 [2024-11-20T09:06:48.336Z] poller_cost: 8436 (cyc), 3834 (nsec) 00:09:53.216 00:09:53.216 ************************************ 00:09:53.216 END TEST thread_poller_perf 00:09:53.216 ************************************ 00:09:53.216 real 0m1.701s 00:09:53.216 user 0m1.466s 00:09:53.216 sys 0m0.121s 00:09:53.216 09:06:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.216 09:06:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:53.216 09:06:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:53.216 09:06:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:53.216 09:06:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.216 09:06:48 thread -- common/autotest_common.sh@10 -- # set +x 00:09:53.216 ************************************ 00:09:53.216 START TEST thread_poller_perf 00:09:53.216 ************************************ 00:09:53.216 09:06:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:53.216 [2024-11-20 09:06:48.212195] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:09:53.216 [2024-11-20 09:06:48.212364] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60866 ] 00:09:53.474 [2024-11-20 09:06:48.406634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.474 [2024-11-20 09:06:48.576117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.474 Running 1000 pollers for 1 seconds with 0 microseconds period. 
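Note: the second run is identical except for -l 0, a 0-microsecond period, so each poller runs on every reactor iteration instead of being dispatched from the timer list. That amortizes the dispatch overhead: total_run_count climbs from 263000 to 3310000 and the per-run cost drops accordingly, as the results below show. The same arithmetic applied to the new counters:

    # poller_cost for the period-0 run
    echo $(( 2205431732 / 3310000 ))                        # 666 cyc per run
    echo $(( 2205431732 / 3310000 * 10**9 / 2200000000 ))   # 302 nsec per run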
00:09:54.849 [2024-11-20T09:06:49.969Z] ====================================== 00:09:54.849 [2024-11-20T09:06:49.969Z] busy:2205431732 (cyc) 00:09:54.849 [2024-11-20T09:06:49.969Z] total_run_count: 3310000 00:09:54.849 [2024-11-20T09:06:49.969Z] tsc_hz: 2200000000 (cyc) 00:09:54.849 [2024-11-20T09:06:49.969Z] ====================================== 00:09:54.849 [2024-11-20T09:06:49.969Z] poller_cost: 666 (cyc), 302 (nsec) 00:09:54.849 00:09:54.849 real 0m1.688s 00:09:54.849 user 0m1.460s 00:09:54.849 sys 0m0.115s 00:09:54.849 ************************************ 00:09:54.849 END TEST thread_poller_perf 00:09:54.849 ************************************ 00:09:54.849 09:06:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.849 09:06:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:54.849 09:06:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:54.849 00:09:54.849 real 0m3.696s 00:09:54.849 user 0m3.077s 00:09:54.849 sys 0m0.382s 00:09:54.849 ************************************ 00:09:54.849 END TEST thread 00:09:54.849 ************************************ 00:09:54.849 09:06:49 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.849 09:06:49 thread -- common/autotest_common.sh@10 -- # set +x 00:09:54.849 09:06:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:54.849 09:06:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:54.849 09:06:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.849 09:06:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.849 09:06:49 -- common/autotest_common.sh@10 -- # set +x 00:09:54.849 ************************************ 00:09:54.849 START TEST app_cmdline 00:09:54.849 ************************************ 00:09:54.849 09:06:49 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:55.109 * Looking for test storage... 
00:09:55.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.109 09:06:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.109 --rc genhtml_branch_coverage=1 00:09:55.109 --rc genhtml_function_coverage=1 00:09:55.109 --rc genhtml_legend=1 00:09:55.109 --rc geninfo_all_blocks=1 00:09:55.109 --rc geninfo_unexecuted_blocks=1 00:09:55.109 00:09:55.109 ' 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.109 --rc genhtml_branch_coverage=1 00:09:55.109 --rc genhtml_function_coverage=1 00:09:55.109 --rc genhtml_legend=1 00:09:55.109 --rc geninfo_all_blocks=1 00:09:55.109 --rc geninfo_unexecuted_blocks=1 00:09:55.109 
00:09:55.109 ' 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.109 --rc genhtml_branch_coverage=1 00:09:55.109 --rc genhtml_function_coverage=1 00:09:55.109 --rc genhtml_legend=1 00:09:55.109 --rc geninfo_all_blocks=1 00:09:55.109 --rc geninfo_unexecuted_blocks=1 00:09:55.109 00:09:55.109 ' 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:55.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.109 --rc genhtml_branch_coverage=1 00:09:55.109 --rc genhtml_function_coverage=1 00:09:55.109 --rc genhtml_legend=1 00:09:55.109 --rc geninfo_all_blocks=1 00:09:55.109 --rc geninfo_unexecuted_blocks=1 00:09:55.109 00:09:55.109 ' 00:09:55.109 09:06:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:55.109 09:06:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60955 00:09:55.109 09:06:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:55.109 09:06:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60955 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60955 ']' 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.109 09:06:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.110 09:06:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:55.369 [2024-11-20 09:06:50.303794] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
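Note: cmdline.sh starts this spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, an allow-list that makes every other JSON-RPC method unreachable. The test below first checks that exactly those two methods are reported and that spdk_get_version matches the build (v25.01-pre, sha1 a5dab6cf7), then confirms that a method off the list, env_dpdk_get_mem_stats, is rejected with JSON-RPC error -32601 "Method not found". The same allow-list behaviour can be exercised by hand with the in-tree rpc.py (all three method names appear in this log):

    # First two calls are on the allow-list and succeed; the third is not
    # and should come back with error -32601.
    ./scripts/rpc.py spdk_get_version
    ./scripts/rpc.py rpc_get_methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # expected to fail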
00:09:55.369 [2024-11-20 09:06:50.304317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60955 ] 00:09:55.628 [2024-11-20 09:06:50.501866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.628 [2024-11-20 09:06:50.676926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.007 09:06:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.007 09:06:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:57.007 09:06:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:57.007 { 00:09:57.007 "version": "SPDK v25.01-pre git sha1 a5dab6cf7", 00:09:57.007 "fields": { 00:09:57.007 "major": 25, 00:09:57.007 "minor": 1, 00:09:57.007 "patch": 0, 00:09:57.007 "suffix": "-pre", 00:09:57.007 "commit": "a5dab6cf7" 00:09:57.007 } 00:09:57.007 } 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:57.007 09:06:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:57.007 09:06:52 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:57.575 request: 00:09:57.575 { 00:09:57.575 "method": "env_dpdk_get_mem_stats", 00:09:57.575 "req_id": 1 00:09:57.575 } 00:09:57.575 Got JSON-RPC error response 00:09:57.575 response: 00:09:57.575 { 00:09:57.575 "code": -32601, 00:09:57.575 "message": "Method not found" 00:09:57.575 } 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.575 09:06:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60955 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60955 ']' 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60955 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60955 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60955' 00:09:57.575 killing process with pid 60955 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 60955 00:09:57.575 09:06:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 60955 00:10:00.109 00:10:00.109 real 0m5.093s 00:10:00.109 user 0m5.505s 00:10:00.109 sys 0m0.794s 00:10:00.109 09:06:55 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.109 09:06:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:00.109 ************************************ 00:10:00.109 END TEST app_cmdline 00:10:00.109 ************************************ 00:10:00.109 09:06:55 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:00.109 09:06:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.109 09:06:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.109 09:06:55 -- common/autotest_common.sh@10 -- # set +x 00:10:00.109 ************************************ 00:10:00.109 START TEST version 00:10:00.109 ************************************ 00:10:00.109 09:06:55 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:00.109 * Looking for test storage... 
00:10:00.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:00.109 09:06:55 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.109 09:06:55 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.109 09:06:55 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.367 09:06:55 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.367 09:06:55 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.367 09:06:55 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.367 09:06:55 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.367 09:06:55 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.367 09:06:55 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.367 09:06:55 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.367 09:06:55 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.367 09:06:55 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.367 09:06:55 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.367 09:06:55 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.367 09:06:55 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.367 09:06:55 version -- scripts/common.sh@344 -- # case "$op" in 00:10:00.367 09:06:55 version -- scripts/common.sh@345 -- # : 1 00:10:00.367 09:06:55 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.367 09:06:55 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:00.367 09:06:55 version -- scripts/common.sh@365 -- # decimal 1 00:10:00.367 09:06:55 version -- scripts/common.sh@353 -- # local d=1 00:10:00.367 09:06:55 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.367 09:06:55 version -- scripts/common.sh@355 -- # echo 1 00:10:00.367 09:06:55 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.367 09:06:55 version -- scripts/common.sh@366 -- # decimal 2 00:10:00.367 09:06:55 version -- scripts/common.sh@353 -- # local d=2 00:10:00.367 09:06:55 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.367 09:06:55 version -- scripts/common.sh@355 -- # echo 2 00:10:00.367 09:06:55 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.367 09:06:55 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.367 09:06:55 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.367 09:06:55 version -- scripts/common.sh@368 -- # return 0 00:10:00.367 09:06:55 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.367 09:06:55 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.367 --rc genhtml_branch_coverage=1 00:10:00.367 --rc genhtml_function_coverage=1 00:10:00.367 --rc genhtml_legend=1 00:10:00.367 --rc geninfo_all_blocks=1 00:10:00.367 --rc geninfo_unexecuted_blocks=1 00:10:00.367 00:10:00.367 ' 00:10:00.367 09:06:55 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.367 --rc genhtml_branch_coverage=1 00:10:00.367 --rc genhtml_function_coverage=1 00:10:00.367 --rc genhtml_legend=1 00:10:00.367 --rc geninfo_all_blocks=1 00:10:00.367 --rc geninfo_unexecuted_blocks=1 00:10:00.367 00:10:00.367 ' 00:10:00.367 09:06:55 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.367 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:00.367 --rc genhtml_branch_coverage=1 00:10:00.367 --rc genhtml_function_coverage=1 00:10:00.367 --rc genhtml_legend=1 00:10:00.367 --rc geninfo_all_blocks=1 00:10:00.367 --rc geninfo_unexecuted_blocks=1 00:10:00.367 00:10:00.367 ' 00:10:00.367 09:06:55 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.367 --rc genhtml_branch_coverage=1 00:10:00.367 --rc genhtml_function_coverage=1 00:10:00.367 --rc genhtml_legend=1 00:10:00.367 --rc geninfo_all_blocks=1 00:10:00.367 --rc geninfo_unexecuted_blocks=1 00:10:00.367 00:10:00.367 ' 00:10:00.367 09:06:55 version -- app/version.sh@17 -- # get_header_version major 00:10:00.367 09:06:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:00.367 09:06:55 version -- app/version.sh@14 -- # cut -f2 00:10:00.367 09:06:55 version -- app/version.sh@14 -- # tr -d '"' 00:10:00.367 09:06:55 version -- app/version.sh@17 -- # major=25 00:10:00.367 09:06:55 version -- app/version.sh@18 -- # get_header_version minor 00:10:00.367 09:06:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:00.367 09:06:55 version -- app/version.sh@14 -- # cut -f2 00:10:00.367 09:06:55 version -- app/version.sh@14 -- # tr -d '"' 00:10:00.367 09:06:55 version -- app/version.sh@18 -- # minor=1 00:10:00.367 09:06:55 version -- app/version.sh@19 -- # get_header_version patch 00:10:00.367 09:06:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:00.367 09:06:55 version -- app/version.sh@14 -- # cut -f2 00:10:00.367 09:06:55 version -- app/version.sh@14 -- # tr -d '"' 00:10:00.367 09:06:55 version -- app/version.sh@19 -- # patch=0 00:10:00.367 09:06:55 version -- app/version.sh@20 -- # get_header_version suffix 00:10:00.368 09:06:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:00.368 09:06:55 version -- app/version.sh@14 -- # cut -f2 00:10:00.368 09:06:55 version -- app/version.sh@14 -- # tr -d '"' 00:10:00.368 09:06:55 version -- app/version.sh@20 -- # suffix=-pre 00:10:00.368 09:06:55 version -- app/version.sh@22 -- # version=25.1 00:10:00.368 09:06:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:00.368 09:06:55 version -- app/version.sh@28 -- # version=25.1rc0 00:10:00.368 09:06:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:00.368 09:06:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:00.368 09:06:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:00.368 09:06:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:00.368 00:10:00.368 real 0m0.255s 00:10:00.368 user 0m0.161s 00:10:00.368 sys 0m0.133s 00:10:00.368 09:06:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.368 ************************************ 00:10:00.368 09:06:55 version -- common/autotest_common.sh@10 -- # set +x 00:10:00.368 END TEST version 00:10:00.368 ************************************ 00:10:00.368 09:06:55 -- 
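Note: version.sh reads each version component straight out of include/spdk/version.h with a grep/cut/tr pipeline, assembles 25.1, appends rc0 for the -pre suffix, and then cross-checks the result against the Python package with python3 -c 'import spdk; print(spdk.__version__)'. One component extraction, exactly as the log shows it (cut's default tab delimiter picks field 2 of the #define line):

    # Extract the major version from the header; tr strips the quotes.
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
        include/spdk/version.h | cut -f2 | tr -d '"'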
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:00.368 09:06:55 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:00.368 09:06:55 -- spdk/autotest.sh@194 -- # uname -s 00:10:00.368 09:06:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:00.368 09:06:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:00.368 09:06:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:00.368 09:06:55 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:10:00.368 09:06:55 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:00.368 09:06:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:00.368 09:06:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.368 09:06:55 -- common/autotest_common.sh@10 -- # set +x 00:10:00.368 ************************************ 00:10:00.368 START TEST blockdev_nvme 00:10:00.368 ************************************ 00:10:00.368 09:06:55 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:00.626 * Looking for test storage... 00:10:00.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.627 09:06:55 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.627 --rc genhtml_branch_coverage=1 00:10:00.627 --rc genhtml_function_coverage=1 00:10:00.627 --rc genhtml_legend=1 00:10:00.627 --rc geninfo_all_blocks=1 00:10:00.627 --rc geninfo_unexecuted_blocks=1 00:10:00.627 00:10:00.627 ' 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.627 --rc genhtml_branch_coverage=1 00:10:00.627 --rc genhtml_function_coverage=1 00:10:00.627 --rc genhtml_legend=1 00:10:00.627 --rc geninfo_all_blocks=1 00:10:00.627 --rc geninfo_unexecuted_blocks=1 00:10:00.627 00:10:00.627 ' 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.627 --rc genhtml_branch_coverage=1 00:10:00.627 --rc genhtml_function_coverage=1 00:10:00.627 --rc genhtml_legend=1 00:10:00.627 --rc geninfo_all_blocks=1 00:10:00.627 --rc geninfo_unexecuted_blocks=1 00:10:00.627 00:10:00.627 ' 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.627 --rc genhtml_branch_coverage=1 00:10:00.627 --rc genhtml_function_coverage=1 00:10:00.627 --rc genhtml_legend=1 00:10:00.627 --rc geninfo_all_blocks=1 00:10:00.627 --rc geninfo_unexecuted_blocks=1 00:10:00.627 00:10:00.627 ' 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:00.627 09:06:55 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61149 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61149 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61149 ']' 00:10:00.627 09:06:55 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.627 09:06:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:00.886 [2024-11-20 09:06:55.769598] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
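Note: blockdev.sh in nvme mode starts a bare spdk_tgt (pid 61149) and then, in setup_nvme_conf, loads a bdev configuration produced by scripts/gen_nvme.sh: one bdev_nvme_attach_controller entry per detected PCIe controller (Nvme0 through Nvme3 at 0000:00:10.0 through 0000:00:13.0), passed to the load_subsystem_config RPC as the next lines show. A roughly equivalent one-off attach for the first controller, using parameters taken from that generated JSON (the -b/-t/-a flag spelling is rpc.py's usual form, assumed here):

    # Attach the first QEMU NVMe controller; its namespace appears as Nvme0n1.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0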
00:10:00.886 [2024-11-20 09:06:55.769858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61149 ] 00:10:00.886 [2024-11-20 09:06:55.967823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.145 [2024-11-20 09:06:56.126236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.162 09:06:57 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.162 09:06:57 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:10:02.162 09:06:57 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:02.162 09:06:57 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:10:02.162 09:06:57 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:10:02.162 09:06:57 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:02.162 09:06:57 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:02.162 09:06:57 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:02.162 09:06:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.162 09:06:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.421 09:06:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.421 09:06:57 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:02.421 09:06:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.421 09:06:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.421 09:06:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.421 09:06:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:10:02.421 09:06:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:02.422 09:06:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.422 09:06:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.422 09:06:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.422 09:06:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:02.422 09:06:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.422 09:06:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.681 09:06:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:02.681 09:06:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.681 09:06:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.681 09:06:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.681 09:06:57 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:02.681 09:06:57 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.681 09:06:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:02.681 09:06:57 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7c9fc595-e761-447e-a951-eba56a2a9e37"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7c9fc595-e761-447e-a951-eba56a2a9e37",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "aa1031c6-46d0-4c06-8f2e-cd9060d4b08c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "aa1031c6-46d0-4c06-8f2e-cd9060d4b08c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "73a3d976-e109-47fb-8ce1-a2a3a651655c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "73a3d976-e109-47fb-8ce1-a2a3a651655c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d044f228-caa5-4d88-b393-6be2d7f611c5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d044f228-caa5-4d88-b393-6be2d7f611c5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c47ed954-ef40-4b68-b00b-9c3d22aff96d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "c47ed954-ef40-4b68-b00b-9c3d22aff96d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a1787f91-273b-412d-944a-15db40c1e28d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a1787f91-273b-412d-944a-15db40c1e28d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:02.681 09:06:57 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61149 00:10:02.681 09:06:57 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61149 ']' 00:10:02.681 09:06:57 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61149 00:10:02.682 09:06:57 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:10:02.682 09:06:57 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.682 09:06:57 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61149 00:10:02.682 killing process with pid 61149 00:10:02.682 09:06:57 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.682 09:06:57 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.682 09:06:57 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61149' 00:10:02.682 09:06:57 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61149 00:10:02.682 09:06:57 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61149 00:10:05.217 09:07:00 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:05.217 09:07:00 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:05.217 09:07:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:05.217 09:07:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.217 09:07:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.217 ************************************ 00:10:05.217 START TEST bdev_hello_world 00:10:05.217 ************************************ 00:10:05.217 09:07:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:05.476 [2024-11-20 09:07:00.406944] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:10:05.476 [2024-11-20 09:07:00.407355] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61244 ] 00:10:05.733 [2024-11-20 09:07:00.596172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.733 [2024-11-20 09:07:00.749736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.670 [2024-11-20 09:07:01.481120] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:06.670 [2024-11-20 09:07:01.481221] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:06.670 [2024-11-20 09:07:01.481271] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:06.670 [2024-11-20 09:07:01.485349] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:06.670 [2024-11-20 09:07:01.485882] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:06.670 [2024-11-20 09:07:01.485927] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:06.670 [2024-11-20 09:07:01.486234] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
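[editor's annotation, not captured output] The trace above covers two steps: setup_nvme_conf feeds the gen_nvme.sh output (the four bdev_nvme_attach_controller entries for 0000:00:10.0 through 0000:00:13.0) to the target via load_subsystem_config, and the hello_world test then writes and reads "Hello World!" through the first namespace. A minimal hedged sketch from the repo root; calling scripts/rpc.py directly in place of the harness's rpc_cmd wrapper, and feeding gen_nvme.sh inline, are assumptions:

    # emit the attach-controller JSON captured in the trace above
    scripts/gen_nvme.sh
    # hand it to a running spdk_tgt (rpc_cmd in the log wraps scripts/rpc.py)
    scripts/rpc.py load_subsystem_config -j "$(scripts/gen_nvme.sh)"
    # run the example app against the resulting Nvme0n1 bdev
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1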
00:10:06.670 00:10:06.670 [2024-11-20 09:07:01.486267] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:07.607 00:10:07.607 real 0m2.397s 00:10:07.607 user 0m1.949s 00:10:07.607 sys 0m0.331s 00:10:07.607 09:07:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.607 ************************************ 00:10:07.607 END TEST bdev_hello_world 00:10:07.607 ************************************ 00:10:07.607 09:07:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:07.866 09:07:02 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:10:07.866 09:07:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.866 09:07:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.866 09:07:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:07.866 ************************************ 00:10:07.866 START TEST bdev_bounds 00:10:07.866 ************************************ 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:07.866 Process bdevio pid: 61297 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61297 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61297' 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61297 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61297 ']' 00:10:07.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.866 09:07:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:07.866 [2024-11-20 09:07:02.868378] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
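[editor's annotation, not captured output] The bdev_bounds test starting here launches the bdevio app in wait mode so the harness can waitforlisten on /var/tmp/spdk.sock, then triggers the CUnit suites over that socket. A hedged sketch using the two commands visible in the trace (flags copied as captured; running bdevio in the background is an assumption):

    # start bdevio as an RPC server over the bdevs in bdev.json
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # once the socket is up, run every registered suite
    test/bdev/bdevio/tests.py perform_tests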
00:10:07.866 [2024-11-20 09:07:02.868972] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61297 ] 00:10:08.125 [2024-11-20 09:07:03.067599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:08.125 [2024-11-20 09:07:03.240142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.125 [2024-11-20 09:07:03.240502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.125 [2024-11-20 09:07:03.240514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.060 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.060 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:09.060 09:07:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:09.060 I/O targets: 00:10:09.060 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:09.060 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:09.060 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:09.060 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:09.060 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:09.060 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:09.060 00:10:09.060 00:10:09.060 CUnit - A unit testing framework for C - Version 2.1-3 00:10:09.060 http://cunit.sourceforge.net/ 00:10:09.060 00:10:09.060 00:10:09.060 Suite: bdevio tests on: Nvme3n1 00:10:09.060 Test: blockdev write read block ...passed 00:10:09.060 Test: blockdev write zeroes read block ...passed 00:10:09.060 Test: blockdev write zeroes read no split ...passed 00:10:09.319 Test: blockdev write zeroes read split ...passed 00:10:09.319 Test: blockdev write zeroes read split partial ...passed 00:10:09.319 Test: blockdev reset ...[2024-11-20 09:07:04.220200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:09.319 [2024-11-20 09:07:04.225159] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
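[editor's annotation, not captured output] The "I/O targets" banner above simply enumerates the bdevs the suites will exercise. It can be reproduced against a running target with the bdev_get_bdevs RPC seen earlier in the log; the jq formatting is an editor's sketch:

    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.name): \(.num_blocks) blocks of \(.block_size) bytes"'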
00:10:09.319 passed 00:10:09.319 Test: blockdev write read 8 blocks ...passed 00:10:09.320 Test: blockdev write read size > 128k ...passed 00:10:09.320 Test: blockdev write read invalid size ...passed 00:10:09.320 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:09.320 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:09.320 Test: blockdev write read max offset ...passed 00:10:09.320 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:09.320 Test: blockdev writev readv 8 blocks ...passed 00:10:09.320 Test: blockdev writev readv 30 x 1block ...passed 00:10:09.320 Test: blockdev writev readv block ...passed 00:10:09.320 Test: blockdev writev readv size > 128k ...passed 00:10:09.320 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:09.320 Test: blockdev comparev and writev ...[2024-11-20 09:07:04.235335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c740a000 len:0x1000 00:10:09.320 [2024-11-20 09:07:04.235683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:09.320 passed 00:10:09.320 Test: blockdev nvme passthru rw ...passed 00:10:09.320 Test: blockdev nvme passthru vendor specific ...passed 00:10:09.320 Test: blockdev nvme admin passthru ...[2024-11-20 09:07:04.236870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:09.320 [2024-11-20 09:07:04.236923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:09.320 passed 00:10:09.320 Test: blockdev copy ...passed 00:10:09.320 Suite: bdevio tests on: Nvme2n3 00:10:09.320 Test: blockdev write read block ...passed 00:10:09.320 Test: blockdev write zeroes read block ...passed 00:10:09.320 Test: blockdev write zeroes read no split ...passed 00:10:09.320 Test: blockdev write zeroes read split ...passed 00:10:09.320 Test: blockdev write zeroes read split partial ...passed 00:10:09.320 Test: blockdev reset ...[2024-11-20 09:07:04.306666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:09.320 passed 00:10:09.320 Test: blockdev write read 8 blocks ...[2024-11-20 09:07:04.312332] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
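[editor's annotation, not captured output] The *NOTICE* completions printed around the comparev and passthru tests are the error paths those tests exercise on purpose, not failures of the run: in the "(02/85)" and "(00/01)" pairs the first number is the NVMe status code type and the second the status code, so 02/85 is Compare Failure and 00/01 is Invalid Command Opcode, and each test still reports "passed" immediately afterwards.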
00:10:09.320 passed 00:10:09.320 Test: blockdev write read size > 128k ...passed 00:10:09.320 Test: blockdev write read invalid size ...passed 00:10:09.320 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:09.320 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:09.320 Test: blockdev write read max offset ...passed 00:10:09.320 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:09.320 Test: blockdev writev readv 8 blocks ...passed 00:10:09.320 Test: blockdev writev readv 30 x 1block ...passed 00:10:09.320 Test: blockdev writev readv block ...passed 00:10:09.320 Test: blockdev writev readv size > 128k ...passed 00:10:09.320 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:09.320 Test: blockdev comparev and writev ...[2024-11-20 09:07:04.321893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:10:09.320 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2aae06000 len:0x1000 00:10:09.320 [2024-11-20 09:07:04.322093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:09.320 passed 00:10:09.320 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:07:04.323507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:09.320 passed 00:10:09.320 Test: blockdev nvme admin passthru ...[2024-11-20 09:07:04.323548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:09.320 passed 00:10:09.320 Test: blockdev copy ...passed 00:10:09.320 Suite: bdevio tests on: Nvme2n2 00:10:09.320 Test: blockdev write read block ...passed 00:10:09.320 Test: blockdev write zeroes read block ...passed 00:10:09.320 Test: blockdev write zeroes read no split ...passed 00:10:09.320 Test: blockdev write zeroes read split ...passed 00:10:09.320 Test: blockdev write zeroes read split partial ...passed 00:10:09.320 Test: blockdev reset ...[2024-11-20 09:07:04.393814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:09.320 [2024-11-20 09:07:04.398862] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
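[editor's annotation, not captured output] Where a test name appears split around a driver message, as in "Test: blockdev comparev and writev ..." above with the COMPARE notice wedged into it, this is most likely the log collector interleaving bdevio's stdout with the SPDK log stream rather than a truncated test; the CUnit run summary at the end of the suites is the authoritative pass/fail count.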
00:10:09.320 passed 00:10:09.320 Test: blockdev write read 8 blocks ...passed 00:10:09.320 Test: blockdev write read size > 128k ...passed 00:10:09.320 Test: blockdev write read invalid size ...passed 00:10:09.320 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:09.320 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:09.320 Test: blockdev write read max offset ...passed 00:10:09.320 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:09.320 Test: blockdev writev readv 8 blocks ...passed 00:10:09.320 Test: blockdev writev readv 30 x 1block ...passed 00:10:09.320 Test: blockdev writev readv block ...passed 00:10:09.320 Test: blockdev writev readv size > 128k ...passed 00:10:09.320 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:09.320 Test: blockdev comparev and writev ...[2024-11-20 09:07:04.409445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e2c3c000 len:0x1000 00:10:09.320 [2024-11-20 09:07:04.409576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:09.320 passed 00:10:09.320 Test: blockdev nvme passthru rw ...passed 00:10:09.320 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:07:04.410641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:09.320 [2024-11-20 09:07:04.410702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:09.320 passed 00:10:09.320 Test: blockdev nvme admin passthru ...passed 00:10:09.320 Test: blockdev copy ...passed 00:10:09.320 Suite: bdevio tests on: Nvme2n1 00:10:09.320 Test: blockdev write read block ...passed 00:10:09.320 Test: blockdev write zeroes read block ...passed 00:10:09.320 Test: blockdev write zeroes read no split ...passed 00:10:09.580 Test: blockdev write zeroes read split ...passed 00:10:09.580 Test: blockdev write zeroes read split partial ...passed 00:10:09.580 Test: blockdev reset ...[2024-11-20 09:07:04.482623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:09.580 passed 00:10:09.580 Test: blockdev write read 8 blocks ...[2024-11-20 09:07:04.487860] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
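[editor's annotation, not captured output] The Nvme2n1, Nvme2n2, and Nvme2n3 suites all reset the controller at 0000:00:12.0: per the configuration dump earlier in this log they are three namespaces (ns_data ids 1 through 3) of the single QEMU controller with serial 12342, so "Resetting controller successful" appears once per namespace suite even though only one device is involved.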
00:10:09.580 passed 00:10:09.580 Test: blockdev write read size > 128k ...passed 00:10:09.580 Test: blockdev write read invalid size ...passed 00:10:09.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:09.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:09.580 Test: blockdev write read max offset ...passed 00:10:09.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:09.580 Test: blockdev writev readv 8 blocks ...passed 00:10:09.580 Test: blockdev writev readv 30 x 1block ...passed 00:10:09.580 Test: blockdev writev readv block ...passed 00:10:09.580 Test: blockdev writev readv size > 128k ...passed 00:10:09.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:09.580 Test: blockdev comparev and writev ...[2024-11-20 09:07:04.498020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e2c38000 len:0x1000 00:10:09.580 [2024-11-20 09:07:04.498085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:09.580 passed 00:10:09.580 Test: blockdev nvme passthru rw ...passed 00:10:09.580 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:07:04.499142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:09.580 [2024-11-20 09:07:04.499179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:09.580 passed 00:10:09.580 Test: blockdev nvme admin passthru ...passed 00:10:09.580 Test: blockdev copy ...passed 00:10:09.580 Suite: bdevio tests on: Nvme1n1 00:10:09.580 Test: blockdev write read block ...passed 00:10:09.580 Test: blockdev write zeroes read block ...passed 00:10:09.580 Test: blockdev write zeroes read no split ...passed 00:10:09.580 Test: blockdev write zeroes read split ...passed 00:10:09.580 Test: blockdev write zeroes read split partial ...passed 00:10:09.580 Test: blockdev reset ...[2024-11-20 09:07:04.567761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:09.580 [2024-11-20 09:07:04.572767] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
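[editor's annotation, not captured output] Each suite's "blockdev reset" test asks the bdev layer to reset the namespace's underlying controller; the paired notices (nvme_ctrlr_disconnect "resetting controller" followed by bdev_nvme_reset_ctrlr_complete "Resetting controller successful") bracket the disconnect and reconnect cycle, and the test only passes once the controller comes back.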
00:10:09.580 passed 00:10:09.580 Test: blockdev write read 8 blocks ...passed 00:10:09.580 Test: blockdev write read size > 128k ...passed 00:10:09.580 Test: blockdev write read invalid size ...passed 00:10:09.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:09.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:09.580 Test: blockdev write read max offset ...passed 00:10:09.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:09.580 Test: blockdev writev readv 8 blocks ...passed 00:10:09.580 Test: blockdev writev readv 30 x 1block ...passed 00:10:09.580 Test: blockdev writev readv block ...passed 00:10:09.580 Test: blockdev writev readv size > 128k ...passed 00:10:09.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:09.580 Test: blockdev comparev and writev ...[2024-11-20 09:07:04.582247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:10:09.580 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2e2c34000 len:0x1000 00:10:09.580 [2024-11-20 09:07:04.582572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:09.580 passed 00:10:09.580 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:07:04.583535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:09.580 [2024-11-20 09:07:04.583580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:09.580 passed 00:10:09.580 Test: blockdev nvme admin passthru ...passed 00:10:09.580 Test: blockdev copy ...passed 00:10:09.580 Suite: bdevio tests on: Nvme0n1 00:10:09.580 Test: blockdev write read block ...passed 00:10:09.580 Test: blockdev write zeroes read block ...passed 00:10:09.580 Test: blockdev write zeroes read no split ...passed 00:10:09.580 Test: blockdev write zeroes read split ...passed 00:10:09.580 Test: blockdev write zeroes read split partial ...passed 00:10:09.581 Test: blockdev reset ...[2024-11-20 09:07:04.655812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:09.581 passed 00:10:09.581 Test: blockdev write read 8 blocks ...[2024-11-20 09:07:04.660340] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:10:09.581 passed 00:10:09.581 Test: blockdev write read size > 128k ...passed 00:10:09.581 Test: blockdev write read invalid size ...passed 00:10:09.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:09.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:09.581 Test: blockdev write read max offset ...passed 00:10:09.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:09.581 Test: blockdev writev readv 8 blocks ...passed 00:10:09.581 Test: blockdev writev readv 30 x 1block ...passed 00:10:09.581 Test: blockdev writev readv block ...passed 00:10:09.581 Test: blockdev writev readv size > 128k ...passed 00:10:09.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:09.581 Test: blockdev comparev and writev ...passed 00:10:09.581 Test: blockdev nvme passthru rw ...[2024-11-20 09:07:04.668355] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:09.581 separate metadata which is not supported yet. 00:10:09.581 passed 00:10:09.581 Test: blockdev nvme passthru vendor specific ...passed 00:10:09.581 Test: blockdev nvme admin passthru ...[2024-11-20 09:07:04.668956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:09.581 [2024-11-20 09:07:04.669058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:09.581 passed 00:10:09.581 Test: blockdev copy ...passed 00:10:09.581 00:10:09.581 Run Summary: Type Total Ran Passed Failed Inactive 00:10:09.581 suites 6 6 n/a 0 0 00:10:09.581 tests 138 138 138 0 0 00:10:09.581 asserts 893 893 893 0 n/a 00:10:09.581 00:10:09.581 Elapsed time = 1.411 seconds 00:10:09.581 0 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61297 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61297 ']' 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61297 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61297 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61297' 00:10:09.840 killing process with pid 61297 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61297 00:10:09.840 09:07:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61297 00:10:10.805 09:07:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:10.805 00:10:10.805 real 0m3.106s 00:10:10.805 user 0m7.868s 00:10:10.805 sys 0m0.537s 00:10:10.805 ************************************ 00:10:10.805 END TEST bdev_bounds 00:10:10.805 ************************************ 00:10:10.805 09:07:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.805 09:07:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # 
set +x 00:10:10.805 09:07:05 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:10.805 09:07:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:10.805 09:07:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.805 09:07:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.805 ************************************ 00:10:10.805 START TEST bdev_nbd 00:10:10.805 ************************************ 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61362 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61362 /var/tmp/spdk-nbd.sock 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61362 ']' 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:10.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.805 09:07:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:11.064 [2024-11-20 09:07:06.033671] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:10:11.064 [2024-11-20 09:07:06.033887] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.324 [2024-11-20 09:07:06.225605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.324 [2024-11-20 09:07:06.385962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:12.261 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:12.519 1+0 records in 00:10:12.519 1+0 records out 00:10:12.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674649 s, 6.1 MB/s 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:12.519 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:12.781 1+0 records in 00:10:12.781 1+0 records out 00:10:12.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968944 s, 4.2 MB/s 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:12.781 09:07:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:13.348 1+0 records in 00:10:13.348 1+0 records out 00:10:13.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007852 s, 5.2 MB/s 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:13.348 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:13.607 
09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:13.607 1+0 records in 00:10:13.607 1+0 records out 00:10:13.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716779 s, 5.7 MB/s 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:13.607 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:13.865 1+0 records in 00:10:13.865 1+0 records out 00:10:13.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680939 s, 6.0 MB/s 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:13.865 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:13.866 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:13.866 09:07:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:14.124 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:14.124 1+0 records in 00:10:14.124 1+0 records out 00:10:14.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104232 s, 3.9 MB/s 00:10:14.125 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.125 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:14.125 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:14.125 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:14.125 09:07:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:14.125 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:14.125 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:10:14.125 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd0", 00:10:14.692 "bdev_name": "Nvme0n1" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd1", 00:10:14.692 "bdev_name": "Nvme1n1" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd2", 00:10:14.692 "bdev_name": "Nvme2n1" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd3", 00:10:14.692 "bdev_name": "Nvme2n2" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd4", 00:10:14.692 "bdev_name": "Nvme2n3" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd5", 00:10:14.692 "bdev_name": "Nvme3n1" 00:10:14.692 } 00:10:14.692 ]' 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd0", 00:10:14.692 "bdev_name": "Nvme0n1" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd1", 00:10:14.692 "bdev_name": "Nvme1n1" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd2", 
00:10:14.692 "bdev_name": "Nvme2n1" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd3", 00:10:14.692 "bdev_name": "Nvme2n2" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd4", 00:10:14.692 "bdev_name": "Nvme2n3" 00:10:14.692 }, 00:10:14.692 { 00:10:14.692 "nbd_device": "/dev/nbd5", 00:10:14.692 "bdev_name": "Nvme3n1" 00:10:14.692 } 00:10:14.692 ]' 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.692 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:14.950 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:14.951 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:14.951 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:14.951 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:14.951 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:14.951 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:14.951 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:14.951 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:14.951 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.951 09:07:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:15.209 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd2 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:15.777 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:16.036 09:07:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:16.295 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:16.554 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:16.812 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:16.813 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:16.813 09:07:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:10:17.071 /dev/nbd0 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:17.071 1+0 records in 00:10:17.071 1+0 records out 00:10:17.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523808 s, 7.8 MB/s 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:17.071 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:10:17.379 /dev/nbd1 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:17.379 1+0 records in 00:10:17.379 1+0 records out 00:10:17.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741222 s, 5.5 MB/s 
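Note on the trace above: after each nbd_start_disk RPC, the harness calls the waitfornbd helper (autotest_common.sh@872-893 in the xtrace), which polls /proc/partitions until the kernel lists the nbd node and then confirms the device is actually readable with a single direct-I/O dd, checking that the copied block is non-empty. A minimal sketch reconstructed from the trace follows; the retry bound of 20 and the 4 KiB probe match the log, but the sleep between retries is an assumption, since xtrace does not show it.

    # Sketch of waitfornbd, paraphrased from the xtrace above.
    waitfornbd() {
        local nbd_name=$1
        local i
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for (( i = 1; i <= 20; i++ )); do
            # Wait until the kernel publishes the device in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed interval; not visible in the trace
        done
        for (( i = 1; i <= 20; i++ )); do
            # Probe with one direct-I/O read; a usable nbd node must
            # return a non-empty 4 KiB block.
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                (( size != 0 )) && return 0
            fi
            sleep 0.1
        done
        return 1
    }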
00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:17.379 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:10:17.661 /dev/nbd10 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:17.661 1+0 records in 00:10:17.661 1+0 records out 00:10:17.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677503 s, 6.0 MB/s 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:17.661 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:17.662 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:17.662 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:10:17.920 /dev/nbd11 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 
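For context, the nbd_start_disks flow being traced (nbd_common.sh@9-17) walks the bdev and nbd arrays in step, registers each bdev on its nbd node over the RPC socket, then waits for the node with waitfornbd before moving on. A hedged reconstruction of that loop, with the rpc.py path, socket, and list contents taken from the log:

    # Sketch of the nbd_start_disks loop shown in the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for (( i = 0; i < ${#bdev_list[@]}; i++ )); do
        # Ask the SPDK app listening on $sock to export the bdev as an nbd node.
        "$rpc" -s "$sock" nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
        # Block until the node is present and readable.
        waitfornbd "$(basename "${nbd_list[$i]}")"
    done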
00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:17.920 1+0 records in 00:10:17.920 1+0 records out 00:10:17.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666481 s, 6.1 MB/s 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:17.920 09:07:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:10:18.179 /dev/nbd12 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:18.179 1+0 records in 00:10:18.179 1+0 records out 00:10:18.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793297 s, 5.2 MB/s 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:18.179 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:10:18.437 /dev/nbd13 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:18.437 1+0 records in 00:10:18.437 1+0 records out 00:10:18.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678755 s, 6.0 MB/s 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:18.437 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd0", 00:10:19.005 "bdev_name": "Nvme0n1" 00:10:19.005 }, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd1", 00:10:19.005 "bdev_name": "Nvme1n1" 00:10:19.005 }, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd10", 00:10:19.005 "bdev_name": "Nvme2n1" 00:10:19.005 }, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd11", 00:10:19.005 "bdev_name": "Nvme2n2" 00:10:19.005 }, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd12", 00:10:19.005 "bdev_name": "Nvme2n3" 00:10:19.005 
}, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd13", 00:10:19.005 "bdev_name": "Nvme3n1" 00:10:19.005 } 00:10:19.005 ]' 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd0", 00:10:19.005 "bdev_name": "Nvme0n1" 00:10:19.005 }, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd1", 00:10:19.005 "bdev_name": "Nvme1n1" 00:10:19.005 }, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd10", 00:10:19.005 "bdev_name": "Nvme2n1" 00:10:19.005 }, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd11", 00:10:19.005 "bdev_name": "Nvme2n2" 00:10:19.005 }, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd12", 00:10:19.005 "bdev_name": "Nvme2n3" 00:10:19.005 }, 00:10:19.005 { 00:10:19.005 "nbd_device": "/dev/nbd13", 00:10:19.005 "bdev_name": "Nvme3n1" 00:10:19.005 } 00:10:19.005 ]' 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:19.005 /dev/nbd1 00:10:19.005 /dev/nbd10 00:10:19.005 /dev/nbd11 00:10:19.005 /dev/nbd12 00:10:19.005 /dev/nbd13' 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:19.005 /dev/nbd1 00:10:19.005 /dev/nbd10 00:10:19.005 /dev/nbd11 00:10:19.005 /dev/nbd12 00:10:19.005 /dev/nbd13' 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:19.005 256+0 records in 00:10:19.005 256+0 records out 00:10:19.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00961898 s, 109 MB/s 00:10:19.005 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:19.006 09:07:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:19.006 256+0 records in 00:10:19.006 256+0 records out 00:10:19.006 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175381 s, 6.0 MB/s 00:10:19.006 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:19.006 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:10:19.264 256+0 records in 00:10:19.264 256+0 records out 00:10:19.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163736 s, 6.4 MB/s 00:10:19.264 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:19.264 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:19.524 256+0 records in 00:10:19.524 256+0 records out 00:10:19.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168378 s, 6.2 MB/s 00:10:19.524 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:19.524 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:19.524 256+0 records in 00:10:19.524 256+0 records out 00:10:19.524 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16536 s, 6.3 MB/s 00:10:19.524 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:19.524 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:19.783 256+0 records in 00:10:19.783 256+0 records out 00:10:19.783 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160614 s, 6.5 MB/s 00:10:19.783 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:19.783 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:20.041 256+0 records in 00:10:20.041 256+0 records out 00:10:20.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201651 s, 5.2 MB/s 00:10:20.041 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:10:20.041 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:20.041 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:20.041 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:20.041 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:20.041 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:20.041 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:20.041 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:20.041 09:07:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:20.041 09:07:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.041 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.299 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.863 09:07:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.121 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.379 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.638 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.896 09:07:16 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.896 09:07:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:22.155 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:22.155 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:22.155 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:22.414 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:22.673 malloc_lvol_verify 00:10:22.673 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:22.932 3f7f802f-0b68-4452-8390-176d5e4e440b 00:10:22.932 09:07:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:23.190 3595ea06-4fae-487d-a85a-1eb6bb46c79e 00:10:23.190 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:23.449 /dev/nbd0 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:23.449 mke2fs 1.47.0 
(5-Feb-2023) 00:10:23.449 Discarding device blocks: 0/4096 done 00:10:23.449 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:23.449 00:10:23.449 Allocating group tables: 0/1 done 00:10:23.449 Writing inode tables: 0/1 done 00:10:23.449 Creating journal (1024 blocks): done 00:10:23.449 Writing superblocks and filesystem accounting information: 0/1 done 00:10:23.449 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:23.449 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61362 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61362 ']' 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61362 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61362 00:10:23.707 killing process with pid 61362 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61362' 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61362 00:10:23.707 09:07:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61362 00:10:25.085 ************************************ 00:10:25.085 END TEST bdev_nbd 00:10:25.085 ************************************ 00:10:25.085 09:07:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:25.085 00:10:25.085 real 0m13.976s 00:10:25.085 user 0m19.852s 00:10:25.085 sys 0m4.505s 00:10:25.085 09:07:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.085 09:07:19 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.085 09:07:19 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:10:25.085 09:07:19 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:10:25.085 skipping fio tests on NVMe due to multi-ns failures. 00:10:25.085 09:07:19 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:25.085 09:07:19 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:25.085 09:07:19 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:25.085 09:07:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:25.085 09:07:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.085 09:07:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:25.085 ************************************ 00:10:25.085 START TEST bdev_verify 00:10:25.085 ************************************ 00:10:25.085 09:07:19 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:25.085 [2024-11-20 09:07:20.056241] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:10:25.085 [2024-11-20 09:07:20.056437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61780 ] 00:10:25.343 [2024-11-20 09:07:20.247327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:25.343 [2024-11-20 09:07:20.393462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.343 [2024-11-20 09:07:20.393479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.278 Running I/O for 5 seconds... 
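The bdev_verify run launched above drives every bdev described in bdev.json with the bdevperf example app; the IOPS readings and per-device latency table that follow are its output. Shown standalone, with the flags whose meanings are clear from standard bdevperf usage annotated (-C is forwarded by the harness as-is; its meaning is not shown in this log):

    # The traced bdevperf invocation, repeated standalone:
    #   -q 128    queue depth
    #   -o 4096   4 KiB I/Os
    #   -w verify write-then-read-back workload with data checking
    #   -t 5      run for 5 seconds
    #   -m 0x3    core mask: reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3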
00:10:28.587 17152.00 IOPS, 67.00 MiB/s [2024-11-20T09:07:24.644Z] 17792.00 IOPS, 69.50 MiB/s [2024-11-20T09:07:25.583Z] 17749.33 IOPS, 69.33 MiB/s [2024-11-20T09:07:26.519Z] 17808.00 IOPS, 69.56 MiB/s [2024-11-20T09:07:26.519Z] 17753.60 IOPS, 69.35 MiB/s 00:10:31.399 Latency(us) 00:10:31.399 [2024-11-20T09:07:26.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.399 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x0 length 0xbd0bd 00:10:31.399 Nvme0n1 : 5.05 1444.95 5.64 0.00 0.00 88290.63 20494.89 75783.45 00:10:31.399 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:31.399 Nvme0n1 : 5.06 1491.28 5.83 0.00 0.00 85609.98 17635.14 80073.08 00:10:31.399 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x0 length 0xa0000 00:10:31.399 Nvme1n1 : 5.05 1443.74 5.64 0.00 0.00 88220.56 22282.24 70063.94 00:10:31.399 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0xa0000 length 0xa0000 00:10:31.399 Nvme1n1 : 5.07 1490.68 5.82 0.00 0.00 85450.17 20494.89 77689.95 00:10:31.399 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x0 length 0x80000 00:10:31.399 Nvme2n1 : 5.06 1443.11 5.64 0.00 0.00 88093.14 21567.30 67204.19 00:10:31.399 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x80000 length 0x80000 00:10:31.399 Nvme2n1 : 5.07 1489.47 5.82 0.00 0.00 85336.66 22043.93 74353.57 00:10:31.399 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x0 length 0x80000 00:10:31.399 Nvme2n2 : 5.07 1450.30 5.67 0.00 0.00 87527.88 7685.59 70063.94 00:10:31.399 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x80000 length 0x80000 00:10:31.399 Nvme2n2 : 5.07 1488.90 5.82 0.00 0.00 85201.45 21448.15 72447.07 00:10:31.399 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x0 length 0x80000 00:10:31.399 Nvme2n3 : 5.08 1449.63 5.66 0.00 0.00 87391.18 8579.26 72923.69 00:10:31.399 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x80000 length 0x80000 00:10:31.399 Nvme2n3 : 5.07 1488.37 5.81 0.00 0.00 85059.48 15013.70 76260.07 00:10:31.399 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x0 length 0x20000 00:10:31.399 Nvme3n1 : 5.09 1458.37 5.70 0.00 0.00 86851.10 9830.40 76260.07 00:10:31.399 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:31.399 Verification LBA range: start 0x20000 length 0x20000 00:10:31.399 Nvme3n1 : 5.08 1487.71 5.81 0.00 0.00 84949.60 10604.92 80073.08 00:10:31.399 [2024-11-20T09:07:26.519Z] =================================================================================================================== 00:10:31.399 [2024-11-20T09:07:26.519Z] Total : 17626.52 68.85 0.00 0.00 86479.21 7685.59 80073.08 00:10:32.777 00:10:32.777 real 0m7.552s 00:10:32.777 user 0m13.857s 00:10:32.777 sys 0m0.350s 00:10:32.777 09:07:27 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.777 09:07:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:32.777 ************************************ 00:10:32.777 END TEST bdev_verify 00:10:32.777 ************************************ 00:10:32.777 09:07:27 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:32.777 09:07:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:32.777 09:07:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.777 09:07:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:32.777 ************************************ 00:10:32.777 START TEST bdev_verify_big_io 00:10:32.777 ************************************ 00:10:32.777 09:07:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:32.777 [2024-11-20 09:07:27.644700] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:10:32.777 [2024-11-20 09:07:27.644890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61878 ] 00:10:32.777 [2024-11-20 09:07:27.818425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:33.036 [2024-11-20 09:07:27.943891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.036 [2024-11-20 09:07:27.943905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.972 Running I/O for 5 seconds... 
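The big-I/O pass reuses the same bdevperf verify workload and changes only the I/O size, from 4 KiB to 64 KiB, which is why throughput (MiB/s) rather than IOPS is the interesting column in the table below. The only flag that differs from the bdev_verify run:

    # Same harness as bdev_verify, with 64 KiB I/Os:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3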
00:10:38.635 1744.00 IOPS, 109.00 MiB/s [2024-11-20T09:07:34.691Z] 2672.00 IOPS, 167.00 MiB/s [2024-11-20T09:07:34.691Z] 3174.67 IOPS, 198.42 MiB/s 00:10:39.571 Latency(us) 00:10:39.571 [2024-11-20T09:07:34.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.571 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x0 length 0xbd0b 00:10:39.571 Nvme0n1 : 5.55 138.43 8.65 0.00 0.00 902514.11 18350.08 899868.86 00:10:39.571 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:39.571 Nvme0n1 : 5.72 134.44 8.40 0.00 0.00 878026.52 80549.70 1182031.13 00:10:39.571 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x0 length 0xa000 00:10:39.571 Nvme1n1 : 5.55 138.32 8.64 0.00 0.00 875426.60 90082.21 808356.77 00:10:39.571 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0xa000 length 0xa000 00:10:39.571 Nvme1n1 : 5.72 134.38 8.40 0.00 0.00 849308.75 81026.33 884616.84 00:10:39.571 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x0 length 0x8000 00:10:39.571 Nvme2n1 : 5.63 142.13 8.88 0.00 0.00 827197.78 72447.07 835047.80 00:10:39.571 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x8000 length 0x8000 00:10:39.571 Nvme2n1 : 5.75 144.72 9.05 0.00 0.00 775379.17 22878.02 884616.84 00:10:39.571 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x0 length 0x8000 00:10:39.571 Nvme2n2 : 5.74 152.61 9.54 0.00 0.00 752130.88 28955.00 831234.79 00:10:39.571 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x8000 length 0x8000 00:10:39.571 Nvme2n2 : 5.78 159.09 9.94 0.00 0.00 689407.62 2427.81 1029510.98 00:10:39.571 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x0 length 0x8000 00:10:39.571 Nvme2n3 : 5.76 154.53 9.66 0.00 0.00 721965.60 30980.65 941811.90 00:10:39.571 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x8000 length 0x8000 00:10:39.571 Nvme2n3 : 5.68 132.23 8.26 0.00 0.00 939214.81 23235.49 888429.85 00:10:39.571 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x0 length 0x2000 00:10:39.571 Nvme3n1 : 5.80 173.50 10.84 0.00 0.00 629872.48 916.01 1807363.72 00:10:39.571 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:39.571 Verification LBA range: start 0x2000 length 0x2000 00:10:39.571 Nvme3n1 : 5.68 132.18 8.26 0.00 0.00 913970.72 52190.49 880803.84 00:10:39.571 [2024-11-20T09:07:34.691Z] =================================================================================================================== 00:10:39.571 [2024-11-20T09:07:34.691Z] Total : 1736.57 108.54 0.00 0.00 804196.34 916.01 1807363.72 00:10:41.474 00:10:41.474 real 0m8.674s 00:10:41.474 user 0m16.068s 00:10:41.474 sys 0m0.374s 00:10:41.474 09:07:36 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.474 09:07:36 blockdev_nvme.bdev_verify_big_io 
-- common/autotest_common.sh@10 -- # set +x 00:10:41.474 ************************************ 00:10:41.474 END TEST bdev_verify_big_io 00:10:41.474 ************************************ 00:10:41.474 09:07:36 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:41.474 09:07:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:41.474 09:07:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.474 09:07:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:41.474 ************************************ 00:10:41.474 START TEST bdev_write_zeroes 00:10:41.474 ************************************ 00:10:41.474 09:07:36 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:41.474 [2024-11-20 09:07:36.393753] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:10:41.474 [2024-11-20 09:07:36.393937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61993 ] 00:10:41.474 [2024-11-20 09:07:36.588113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.733 [2024-11-20 09:07:36.749810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.669 Running I/O for 1 seconds... 00:10:43.604 56000.00 IOPS, 218.75 MiB/s 00:10:43.604 Latency(us) 00:10:43.604 [2024-11-20T09:07:38.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:43.604 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:43.604 Nvme0n1 : 1.03 9250.87 36.14 0.00 0.00 13799.31 10247.45 29431.62 00:10:43.604 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:43.604 Nvme1n1 : 1.03 9236.57 36.08 0.00 0.00 13795.61 10366.60 28597.53 00:10:43.604 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:43.604 Nvme2n1 : 1.03 9222.80 36.03 0.00 0.00 13782.74 10366.60 27763.43 00:10:43.604 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:43.604 Nvme2n2 : 1.04 9208.90 35.97 0.00 0.00 13774.03 9294.20 27048.49 00:10:43.604 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:43.604 Nvme2n3 : 1.04 9195.07 35.92 0.00 0.00 13732.81 6464.23 27286.81 00:10:43.604 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:43.604 Nvme3n1 : 1.04 9119.20 35.62 0.00 0.00 13825.51 11379.43 29669.93 00:10:43.604 [2024-11-20T09:07:38.724Z] =================================================================================================================== 00:10:43.604 [2024-11-20T09:07:38.724Z] Total : 55233.41 215.76 0.00 0.00 13784.96 6464.23 29669.93 00:10:44.980 00:10:44.980 real 0m3.378s 00:10:44.980 user 0m2.878s 00:10:44.980 sys 0m0.375s 00:10:44.980 09:07:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.980 09:07:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:44.980 
************************************ 00:10:44.980 END TEST bdev_write_zeroes 00:10:44.980 ************************************ 00:10:44.980 09:07:39 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:44.980 09:07:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:44.980 09:07:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.980 09:07:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:44.980 ************************************ 00:10:44.980 START TEST bdev_json_nonenclosed 00:10:44.980 ************************************ 00:10:44.980 09:07:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:44.980 [2024-11-20 09:07:39.839425] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:10:44.980 [2024-11-20 09:07:39.839605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62046 ] 00:10:44.980 [2024-11-20 09:07:40.022417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.239 [2024-11-20 09:07:40.168201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.239 [2024-11-20 09:07:40.168369] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:45.239 [2024-11-20 09:07:40.168399] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:45.239 [2024-11-20 09:07:40.168412] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:45.496 00:10:45.496 real 0m0.715s 00:10:45.496 user 0m0.457s 00:10:45.496 sys 0m0.152s 00:10:45.496 09:07:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.497 09:07:40 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:45.497 ************************************ 00:10:45.497 END TEST bdev_json_nonenclosed 00:10:45.497 ************************************ 00:10:45.497 09:07:40 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:45.497 09:07:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:45.497 09:07:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.497 09:07:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:45.497 ************************************ 00:10:45.497 START TEST bdev_json_nonarray 00:10:45.497 ************************************ 00:10:45.497 09:07:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:45.497 [2024-11-20 09:07:40.609873] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
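Both JSON tests here are negative tests: each feeds bdevperf a deliberately malformed config and passes when the app refuses to start. The nonenclosed run above fails with "not enclosed in {}.", and the nonarray run just starting fails on a 'subsystems' member that is not an array (its error appears below). The file bodies are not printed in the log, so the shapes below are illustrative assumptions inferred from the error messages:

    # Assumed shapes of the malformed configs (the log shows only the errors):
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # -> json_config: Invalid JSON configuration: not enclosed in {}.

    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    # -> json_config: Invalid JSON configuration: 'subsystems' should be an array.

    # A valid skeleton, by contrast, is an object whose "subsystems"
    # member is an array:  { "subsystems": [] }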
00:10:45.497 [2024-11-20 09:07:40.610049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62077 ] 00:10:45.755 [2024-11-20 09:07:40.794577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.013 [2024-11-20 09:07:40.939795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.013 [2024-11-20 09:07:40.939966] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:10:46.013 [2024-11-20 09:07:40.939997] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:46.013 [2024-11-20 09:07:40.940011] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:46.272 00:10:46.272 real 0m0.718s 00:10:46.272 user 0m0.444s 00:10:46.272 sys 0m0.169s 00:10:46.272 09:07:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.272 09:07:41 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:46.272 ************************************ 00:10:46.272 END TEST bdev_json_nonarray 00:10:46.272 ************************************ 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:46.272 09:07:41 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:46.272 00:10:46.272 real 0m45.853s 00:10:46.272 user 1m8.395s 00:10:46.272 sys 0m7.913s 00:10:46.272 ************************************ 00:10:46.272 END TEST blockdev_nvme 00:10:46.272 ************************************ 00:10:46.272 09:07:41 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.272 09:07:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:46.272 09:07:41 -- spdk/autotest.sh@209 -- # uname -s 00:10:46.272 09:07:41 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:10:46.272 09:07:41 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:46.272 09:07:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.272 09:07:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.272 09:07:41 -- common/autotest_common.sh@10 -- # set +x 00:10:46.272 ************************************ 00:10:46.272 START TEST blockdev_nvme_gpt 00:10:46.272 ************************************ 00:10:46.272 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:46.531 * Looking for test storage... 
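The two negative tests just completed (bdev_json_nonenclosed and bdev_json_nonarray) hand bdevperf configs that fail SPDK's json_config validation with exactly the errors logged above: "not enclosed in {}" and "'subsystems' should be an array". A sketch of inputs that would trip the same checks; the contents here are assumptions for illustration, the real fixtures being test/bdev/nonenclosed.json and test/bdev/nonarray.json:

    # Assumed shape: valid JSON whose top level is an array, not an object,
    # so json_config_prepare_ctx reports "not enclosed in {}".
    printf '[]\n' > /tmp/nonenclosed.json
    # Assumed shape: 'subsystems' present but an object rather than an array.
    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json
    # Either file should make bdevperf exit non-zero with the matching *ERROR* line:
    ./build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1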
00:10:46.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:46.531 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.531 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.531 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.531 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.531 09:07:41 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:46.531 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.531 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.531 --rc genhtml_branch_coverage=1 00:10:46.531 --rc genhtml_function_coverage=1 00:10:46.531 --rc genhtml_legend=1 00:10:46.531 --rc geninfo_all_blocks=1 00:10:46.531 --rc geninfo_unexecuted_blocks=1 00:10:46.531 00:10:46.531 ' 00:10:46.531 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.531 --rc 
genhtml_branch_coverage=1 00:10:46.531 --rc genhtml_function_coverage=1 00:10:46.531 --rc genhtml_legend=1 00:10:46.531 --rc geninfo_all_blocks=1 00:10:46.531 --rc geninfo_unexecuted_blocks=1 00:10:46.531 00:10:46.531 ' 00:10:46.531 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.531 --rc genhtml_branch_coverage=1 00:10:46.531 --rc genhtml_function_coverage=1 00:10:46.531 --rc genhtml_legend=1 00:10:46.531 --rc geninfo_all_blocks=1 00:10:46.531 --rc geninfo_unexecuted_blocks=1 00:10:46.531 00:10:46.531 ' 00:10:46.531 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.531 --rc genhtml_branch_coverage=1 00:10:46.531 --rc genhtml_function_coverage=1 00:10:46.531 --rc genhtml_legend=1 00:10:46.531 --rc geninfo_all_blocks=1 00:10:46.531 --rc geninfo_unexecuted_blocks=1 00:10:46.531 00:10:46.531 ' 00:10:46.531 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:46.531 09:07:41 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:46.531 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62161 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62161 
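The xtrace above is scripts/common.sh probing the installed lcov version before choosing LCOV_OPTS: both version strings are split on '.', '-' and ':' and compared field by field. A condensed sketch of that comparison, assuming purely numeric fields (the real helper also normalizes non-numeric fields through decimal()):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1  # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov is pre-2.x'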
00:10:46.532 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62161 ']' 00:10:46.532 09:07:41 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:46.532 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.532 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.532 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.532 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.532 09:07:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:46.790 [2024-11-20 09:07:41.668491] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:10:46.790 [2024-11-20 09:07:41.668696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62161 ] 00:10:46.790 [2024-11-20 09:07:41.851657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.049 [2024-11-20 09:07:41.996007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.986 09:07:42 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.986 09:07:42 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:10:47.986 09:07:42 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:47.986 09:07:42 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:10:47.986 09:07:42 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:48.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:48.504 Waiting for block devices as requested 00:10:48.504 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.504 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.763 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.763 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.035 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:54.035 09:07:48 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:54.035 09:07:48 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:54.035 BYT; 00:10:54.035 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:54.035 BYT; 00:10:54.035 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:54.035 09:07:48 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:54.035 09:07:48 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:54.972 The operation has completed successfully. 00:10:54.972 09:07:49 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:55.909 The operation has completed successfully. 00:10:55.909 09:07:50 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:56.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:57.045 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:57.045 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:57.045 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:57.303 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:57.303 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:57.303 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.303 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:57.303 [] 00:10:57.303 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.303 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:57.303 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:57.303 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:57.303 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:57.304 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:57.304 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.304 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:57.562 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.562 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:57.562 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.562 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:57.562 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.562 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:10:57.562 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:57.562 09:07:52 
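The GPT preparation traced above condenses to four commands: give the blank disk a GPT label, split it into two halves, then stamp each partition with SPDK's partition type GUID and a fixed unique GUID. A sketch with every GUID value taken verbatim from the trace:

    disk=/dev/nvme0n1   # the device parted reported as 'unrecognised disk label'
    parted -s "$disk" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # Type GUIDs as grepped out of module/bdev/gpt/gpt.h, 0x prefixes stripped:
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$disk"   # SPDK_GPT_GUID
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$disk"   # SPDK_GPT_OLD_GUID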
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.562 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:57.562 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.562 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:57.562 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.562 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:57.821 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.821 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:57.821 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.821 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:57.821 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.821 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:57.821 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:57.821 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:57.821 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.821 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:57.821 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.821 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:57.821 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:57.822 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6dfca93b-03dc-41dc-9092-d0c7f77fe804"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6dfca93b-03dc-41dc-9092-d0c7f77fe804",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d21acc50-9278-4e4e-bee9-8d1aa166d816"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d21acc50-9278-4e4e-bee9-8d1aa166d816",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f4ed8f53-a1d4-436a-bc7f-eae2477c0ce5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4ed8f53-a1d4-436a-bc7f-eae2477c0ce5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "bd9ef518-fdb4-4f12-8de4-d4d0bbcb512e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bd9ef518-fdb4-4f12-8de4-d4d0bbcb512e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a847c57c-aec0-4681-a4b4-d097a2651120"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a847c57c-aec0-4681-a4b4-d097a2651120",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:57.822 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:57.822 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:10:57.822 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:57.822 09:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62161 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62161 ']' 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62161 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62161 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.822 killing process with pid 62161 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62161' 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62161 00:10:57.822 09:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62161 00:11:00.387 09:07:55 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:00.387 09:07:55 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:00.387 09:07:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:00.387 09:07:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.387 09:07:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:00.387 ************************************ 00:11:00.387 START TEST bdev_hello_world 00:11:00.387 ************************************ 00:11:00.387 09:07:55 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:11:00.387 
[2024-11-20 09:07:55.163385] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:11:00.387 [2024-11-20 09:07:55.163540] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62797 ] 00:11:00.387 [2024-11-20 09:07:55.330427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.387 [2024-11-20 09:07:55.469755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.323 [2024-11-20 09:07:56.137773] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:01.323 [2024-11-20 09:07:56.137838] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:11:01.323 [2024-11-20 09:07:56.137869] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:01.323 [2024-11-20 09:07:56.141302] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:01.323 [2024-11-20 09:07:56.141823] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:01.323 [2024-11-20 09:07:56.141861] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:01.323 [2024-11-20 09:07:56.142062] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:01.323 00:11:01.323 [2024-11-20 09:07:56.142096] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:02.260 00:11:02.260 real 0m2.088s 00:11:02.260 user 0m1.681s 00:11:02.260 sys 0m0.298s 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:02.260 ************************************ 00:11:02.260 END TEST bdev_hello_world 00:11:02.260 ************************************ 00:11:02.260 09:07:57 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:02.260 09:07:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.260 09:07:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.260 09:07:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:02.260 ************************************ 00:11:02.260 START TEST bdev_bounds 00:11:02.260 ************************************ 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62839 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:02.260 Process bdevio pid: 62839 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62839' 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62839 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62839 ']' 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.260 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.260 09:07:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:02.260 [2024-11-20 09:07:57.313504] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:11:02.260 [2024-11-20 09:07:57.313784] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62839 ] 00:11:02.520 [2024-11-20 09:07:57.489416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:02.520 [2024-11-20 09:07:57.631170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.520 [2024-11-20 09:07:57.631299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.520 [2024-11-20 09:07:57.631308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.457 09:07:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.457 09:07:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:11:03.457 09:07:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:03.457 I/O targets: 00:11:03.457 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:03.457 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:11:03.457 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:11:03.457 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:03.457 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:03.457 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:03.457 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:03.457 00:11:03.457 00:11:03.457 CUnit - A unit testing framework for C - Version 2.1-3 00:11:03.457 http://cunit.sourceforge.net/ 00:11:03.457 00:11:03.457 00:11:03.457 Suite: bdevio tests on: Nvme3n1 00:11:03.457 Test: blockdev write read block ...passed 00:11:03.457 Test: blockdev write zeroes read block ...passed 00:11:03.457 Test: blockdev write zeroes read no split ...passed 00:11:03.716 Test: blockdev write zeroes read split ...passed 00:11:03.716 Test: blockdev write zeroes read split partial ...passed 00:11:03.716 Test: blockdev reset ...[2024-11-20 09:07:58.603971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:11:03.716 [2024-11-20 09:07:58.609321] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
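For reference, the bdevio harness now producing these suites was launched a few entries up with -w (wait for the RPC that kicks off the tests) and -s 0 (the PRE_RESERVED_MEM value set when blockdev.sh began); a minimal sketch of that flow, assuming the same repo-root paths:

    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
    bdevio_pid=$!
    # the real script polls /var/tmp/spdk.sock here (waitforlisten)
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid" && wait "$bdevio_pid"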
00:11:03.716 passed 00:11:03.716 Test: blockdev write read 8 blocks ...passed 00:11:03.716 Test: blockdev write read size > 128k ...passed 00:11:03.716 Test: blockdev write read invalid size ...passed 00:11:03.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.716 Test: blockdev write read max offset ...passed 00:11:03.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.716 Test: blockdev writev readv 8 blocks ...passed 00:11:03.716 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.716 Test: blockdev writev readv block ...passed 00:11:03.716 Test: blockdev writev readv size > 128k ...passed 00:11:03.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.716 Test: blockdev comparev and writev ...[2024-11-20 09:07:58.619980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5404000 len:0x1000 00:11:03.716 [2024-11-20 09:07:58.620071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.716 passed 00:11:03.716 Test: blockdev nvme passthru rw ...passed 00:11:03.716 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:07:58.621098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:03.716 [2024-11-20 09:07:58.621162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:03.716 passed 00:11:03.716 Test: blockdev nvme admin passthru ...passed 00:11:03.716 Test: blockdev copy ...passed 00:11:03.716 Suite: bdevio tests on: Nvme2n3 00:11:03.716 Test: blockdev write read block ...passed 00:11:03.716 Test: blockdev write zeroes read block ...passed 00:11:03.716 Test: blockdev write zeroes read no split ...passed 00:11:03.716 Test: blockdev write zeroes read split ...passed 00:11:03.716 Test: blockdev write zeroes read split partial ...passed 00:11:03.716 Test: blockdev reset ...[2024-11-20 09:07:58.697759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:03.716 [2024-11-20 09:07:58.702779] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:03.716 passed 00:11:03.716 Test: blockdev write read 8 blocks ...passed 00:11:03.716 Test: blockdev write read size > 128k ...passed 00:11:03.716 Test: blockdev write read invalid size ...passed 00:11:03.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.716 Test: blockdev write read max offset ...passed 00:11:03.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.716 Test: blockdev writev readv 8 blocks ...passed 00:11:03.716 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.716 Test: blockdev writev readv block ...passed 00:11:03.716 Test: blockdev writev readv size > 128k ...passed 00:11:03.716 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.716 Test: blockdev comparev and writev ...[2024-11-20 09:07:58.713507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5402000 len:0x1000 00:11:03.716 [2024-11-20 09:07:58.713568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.716 passed 00:11:03.716 Test: blockdev nvme passthru rw ...passed 00:11:03.716 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:07:58.714484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:03.716 passed 00:11:03.716 Test: blockdev nvme admin passthru ...[2024-11-20 09:07:58.714533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:03.716 passed 00:11:03.716 Test: blockdev copy ...passed 00:11:03.716 Suite: bdevio tests on: Nvme2n2 00:11:03.716 Test: blockdev write read block ...passed 00:11:03.716 Test: blockdev write zeroes read block ...passed 00:11:03.716 Test: blockdev write zeroes read no split ...passed 00:11:03.716 Test: blockdev write zeroes read split ...passed 00:11:03.716 Test: blockdev write zeroes read split partial ...passed 00:11:03.716 Test: blockdev reset ...[2024-11-20 09:07:58.819326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:03.716 [2024-11-20 09:07:58.824038] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:03.716 passed 00:11:03.716 Test: blockdev write read 8 blocks ...passed 00:11:03.716 Test: blockdev write read size > 128k ...passed 00:11:03.716 Test: blockdev write read invalid size ...passed 00:11:03.716 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.716 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.716 Test: blockdev write read max offset ...passed 00:11:03.716 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.716 Test: blockdev writev readv 8 blocks ...passed 00:11:03.716 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.717 Test: blockdev writev readv block ...passed 00:11:03.717 Test: blockdev writev readv size > 128k ...passed 00:11:03.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.717 Test: blockdev comparev and writev ...[2024-11-20 09:07:58.833445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8238000 len:0x1000 00:11:03.977 [2024-11-20 09:07:58.833503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.977 passed 00:11:03.977 Test: blockdev nvme passthru rw ...passed 00:11:03.977 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:07:58.834460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:03.977 [2024-11-20 09:07:58.834502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:03.977 passed 00:11:03.977 Test: blockdev nvme admin passthru ...passed 00:11:03.977 Test: blockdev copy ...passed 00:11:03.977 Suite: bdevio tests on: Nvme2n1 00:11:03.977 Test: blockdev write read block ...passed 00:11:03.977 Test: blockdev write zeroes read block ...passed 00:11:03.977 Test: blockdev write zeroes read no split ...passed 00:11:03.977 Test: blockdev write zeroes read split ...passed 00:11:03.977 Test: blockdev write zeroes read split partial ...passed 00:11:03.977 Test: blockdev reset ...[2024-11-20 09:07:58.923713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:11:03.977 [2024-11-20 09:07:58.928191] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:11:03.977 passed 00:11:03.977 Test: blockdev write read 8 blocks ...passed 00:11:03.977 Test: blockdev write read size > 128k ...passed 00:11:03.977 Test: blockdev write read invalid size ...passed 00:11:03.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.977 Test: blockdev write read max offset ...passed 00:11:03.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.977 Test: blockdev writev readv 8 blocks ...passed 00:11:03.977 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.977 Test: blockdev writev readv block ...passed 00:11:03.977 Test: blockdev writev readv size > 128k ...passed 00:11:03.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.977 Test: blockdev comparev and writev ...[2024-11-20 09:07:58.937294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8234000 len:0x1000 00:11:03.977 [2024-11-20 09:07:58.937357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.977 passed 00:11:03.977 Test: blockdev nvme passthru rw ...passed 00:11:03.977 Test: blockdev nvme passthru vendor specific ...passed 00:11:03.977 Test: blockdev nvme admin passthru ...[2024-11-20 09:07:58.938260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:11:03.977 [2024-11-20 09:07:58.938305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:11:03.977 passed 00:11:03.977 Test: blockdev copy ...passed 00:11:03.977 Suite: bdevio tests on: Nvme1n1p2 00:11:03.977 Test: blockdev write read block ...passed 00:11:03.977 Test: blockdev write zeroes read block ...passed 00:11:03.977 Test: blockdev write zeroes read no split ...passed 00:11:03.977 Test: blockdev write zeroes read split ...passed 00:11:03.977 Test: blockdev write zeroes read split partial ...passed 00:11:03.977 Test: blockdev reset ...[2024-11-20 09:07:59.034423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:03.977 [2024-11-20 09:07:59.038356] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:11:03.977 passed 00:11:03.977 Test: blockdev write read 8 blocks ...passed 00:11:03.977 Test: blockdev write read size > 128k ...passed 00:11:03.977 Test: blockdev write read invalid size ...passed 00:11:03.977 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:03.977 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:03.977 Test: blockdev write read max offset ...passed 00:11:03.977 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:03.977 Test: blockdev writev readv 8 blocks ...passed 00:11:03.977 Test: blockdev writev readv 30 x 1block ...passed 00:11:03.977 Test: blockdev writev readv block ...passed 00:11:03.977 Test: blockdev writev readv size > 128k ...passed 00:11:03.977 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:03.977 Test: blockdev comparev and writev ...[2024-11-20 09:07:59.049196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d8230000 len:0x1000 00:11:03.977 [2024-11-20 09:07:59.049259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:03.977 passed 00:11:03.977 Test: blockdev nvme passthru rw ...passed 00:11:03.977 Test: blockdev nvme passthru vendor specific ...passed 00:11:03.977 Test: blockdev nvme admin passthru ...passed 00:11:03.977 Test: blockdev copy ...passed 00:11:03.977 Suite: bdevio tests on: Nvme1n1p1 00:11:03.977 Test: blockdev write read block ...passed 00:11:03.977 Test: blockdev write zeroes read block ...passed 00:11:03.977 Test: blockdev write zeroes read no split ...passed 00:11:04.236 Test: blockdev write zeroes read split ...passed 00:11:04.236 Test: blockdev write zeroes read split partial ...passed 00:11:04.236 Test: blockdev reset ...[2024-11-20 09:07:59.134723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:11:04.236 [2024-11-20 09:07:59.138615] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:11:04.236 passed 00:11:04.236 Test: blockdev write read 8 blocks ...
00:11:04.236 passed 00:11:04.236 Test: blockdev write read size > 128k ...passed 00:11:04.236 Test: blockdev write read invalid size ...passed 00:11:04.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:04.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:04.236 Test: blockdev write read max offset ...passed 00:11:04.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:04.236 Test: blockdev writev readv 8 blocks ...passed 00:11:04.236 Test: blockdev writev readv 30 x 1block ...passed 00:11:04.236 Test: blockdev writev readv block ...passed 00:11:04.236 Test: blockdev writev readv size > 128k ...passed 00:11:04.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:04.236 Test: blockdev comparev and writev ...[2024-11-20 09:07:59.148042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c5e0e000 len:0x1000 00:11:04.236 [2024-11-20 09:07:59.148101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:11:04.236 passed 00:11:04.236 Test: blockdev nvme passthru rw ...passed 00:11:04.236 Test: blockdev nvme passthru vendor specific ...passed 00:11:04.236 Test: blockdev nvme admin passthru ...passed 00:11:04.236 Test: blockdev copy ...passed 00:11:04.236 Suite: bdevio tests on: Nvme0n1 00:11:04.236 Test: blockdev write read block ...passed 00:11:04.236 Test: blockdev write zeroes read block ...passed 00:11:04.236 Test: blockdev write zeroes read no split ...passed 00:11:04.236 Test: blockdev write zeroes read split ...passed 00:11:04.236 Test: blockdev write zeroes read split partial ...passed 00:11:04.236 Test: blockdev reset ...[2024-11-20 09:07:59.292663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:04.236 [2024-11-20 09:07:59.296747] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:04.236 passed 00:11:04.236 Test: blockdev write read 8 blocks ...passed 00:11:04.236 Test: blockdev write read size > 128k ...passed 00:11:04.236 Test: blockdev write read invalid size ...passed 00:11:04.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:04.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:04.236 Test: blockdev write read max offset ...passed 00:11:04.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:04.236 Test: blockdev writev readv 8 blocks ...passed 00:11:04.236 Test: blockdev writev readv 30 x 1block ...passed 00:11:04.236 Test: blockdev writev readv block ...passed 00:11:04.236 Test: blockdev writev readv size > 128k ...passed 00:11:04.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:04.236 Test: blockdev comparev and writev ...[2024-11-20 09:07:59.307235] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:11:04.236 separate metadata which is not supported yet. 
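The *ERROR* line above is a deliberate skip rather than a failure: Nvme0n1 exposes separate (non-interleaved) metadata, which blockdev_comparev_and_writev does not support yet, so bdevio bypasses the test and still reports it as passed. One hedged way to spot such bdevs before a run (the md_size and md_interleave field names are assumed from typical bdev_get_bdevs JSON output, not confirmed by this log):

    # Sketch: list bdevs carrying separate metadata that bdevio would skip.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.md_size > 0 and (.md_interleave | not)) | .name'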
00:11:04.236 passed 00:11:04.236 Test: blockdev nvme passthru rw ...passed 00:11:04.236 Test: blockdev nvme passthru vendor specific ...passed 00:11:04.236 Test: blockdev nvme admin passthru ...[2024-11-20 09:07:59.308117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:11:04.236 [2024-11-20 09:07:59.308300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:11:04.236 passed 00:11:04.236 Test: blockdev copy ...passed 00:11:04.236 00:11:04.236 Run Summary: Type Total Ran Passed Failed Inactive 00:11:04.236 suites 7 7 n/a 0 0 00:11:04.236 tests 161 161 161 0 0 00:11:04.236 asserts 1025 1025 1025 0 n/a 00:11:04.236 00:11:04.236 Elapsed time = 2.099 seconds 00:11:04.236 0 00:11:04.236 09:07:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62839 00:11:04.236 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62839 ']' 00:11:04.236 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62839 00:11:04.236 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:11:04.236 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.236 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62839 00:11:04.496 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.496 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.496 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62839' 00:11:04.496 killing process with pid 62839 00:11:04.496 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62839 00:11:04.496 09:07:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62839 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:05.465 00:11:05.465 real 0m3.143s 00:11:05.465 user 0m7.956s 00:11:05.465 sys 0m0.489s 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.465 ************************************ 00:11:05.465 END TEST bdev_bounds 00:11:05.465 ************************************ 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:05.465 09:08:00 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:05.465 09:08:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:05.465 09:08:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.465 09:08:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:05.465 ************************************ 00:11:05.465 START TEST bdev_nbd 00:11:05.465 ************************************ 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:05.465 09:08:00 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62904 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62904 /var/tmp/spdk-nbd.sock 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62904 ']' 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:05.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.465 09:08:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:05.465 [2024-11-20 09:08:00.538821] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
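From this point on the log is the bdev_nbd half of the suite: a bdev_svc app listening on /var/tmp/spdk-nbd.sock exports each bdev as a kernel /dev/nbdX node, first letting the RPC pick the node (start/stop verify), then pinning explicit nodes for the data verify. Stripped of the xtrace noise, the start/stop-verify loop being traced reduces to roughly this sketch (paths and RPC names are taken from the trace; the loop itself is a condensation, not the verbatim script):

    # Condensed sketch of nbd_rpc_start_stop_verify as traced below.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)

    for bdev in "${bdevs[@]}"; do
        # With no /dev/nbdX argument the RPC allocates the next free node
        # and prints it, e.g. /dev/nbd0.
        dev=$(rpc nbd_start_disk "$bdev")
        grep -q -w "$(basename "$dev")" /proc/partitions || exit 1
    done

    # nbd_get_disks reports the live bdev-to-/dev/nbdX pairs as JSON.
    for dev in $(rpc nbd_get_disks | jq -r '.[] | .nbd_device'); do
        rpc nbd_stop_disk "$dev"
    done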
00:11:05.465 [2024-11-20 09:08:00.539320] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.724 [2024-11-20 09:08:00.733867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.982 [2024-11-20 09:08:00.872168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:06.548 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:11:06.806 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:06.806 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.064 1+0 records in 00:11:07.064 1+0 records out 00:11:07.064 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047808 s, 8.6 MB/s 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:07.064 09:08:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.321 1+0 records in 00:11:07.321 1+0 records out 00:11:07.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759319 s, 5.4 MB/s 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:07.321 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.579 1+0 records in 00:11:07.579 1+0 records out 00:11:07.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870836 s, 4.7 MB/s 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:07.579 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:07.837 1+0 records in 00:11:07.837 1+0 records out 00:11:07.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789281 s, 5.2 MB/s 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:07.837 09:08:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:11:08.403 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.404 1+0 records in 00:11:08.404 1+0 records out 00:11:08.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641095 s, 6.4 MB/s 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:08.404 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.662 1+0 records in 00:11:08.662 1+0 records out 00:11:08.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753685 s, 5.4 MB/s 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:08.662 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.952 1+0 records in 00:11:08.952 1+0 records out 00:11:08.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136179 s, 3.0 MB/s 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:11:08.952 09:08:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:09.210 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:09.210 { 00:11:09.210 "nbd_device": "/dev/nbd0", 00:11:09.210 "bdev_name": "Nvme0n1" 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "nbd_device": "/dev/nbd1", 00:11:09.210 "bdev_name": "Nvme1n1p1" 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "nbd_device": "/dev/nbd2", 00:11:09.210 "bdev_name": "Nvme1n1p2" 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "nbd_device": "/dev/nbd3", 00:11:09.210 "bdev_name": "Nvme2n1" 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "nbd_device": "/dev/nbd4", 00:11:09.210 "bdev_name": "Nvme2n2" 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "nbd_device": "/dev/nbd5", 00:11:09.210 "bdev_name": "Nvme2n3" 00:11:09.210 }, 00:11:09.210 { 00:11:09.210 "nbd_device": "/dev/nbd6", 00:11:09.210 "bdev_name": "Nvme3n1" 00:11:09.210 } 00:11:09.210 ]' 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:09.211 { 00:11:09.211 "nbd_device": "/dev/nbd0", 00:11:09.211 "bdev_name": "Nvme0n1" 00:11:09.211 }, 00:11:09.211 { 00:11:09.211 "nbd_device": "/dev/nbd1", 00:11:09.211 "bdev_name": "Nvme1n1p1" 00:11:09.211 }, 00:11:09.211 { 00:11:09.211 "nbd_device": "/dev/nbd2", 00:11:09.211 "bdev_name": "Nvme1n1p2" 00:11:09.211 }, 00:11:09.211 { 00:11:09.211 "nbd_device": "/dev/nbd3", 00:11:09.211 "bdev_name": "Nvme2n1" 00:11:09.211 }, 00:11:09.211 { 00:11:09.211 "nbd_device": "/dev/nbd4", 00:11:09.211 "bdev_name": "Nvme2n2" 00:11:09.211 }, 00:11:09.211 { 00:11:09.211 "nbd_device": "/dev/nbd5", 00:11:09.211 "bdev_name": "Nvme2n3" 00:11:09.211 }, 00:11:09.211 { 00:11:09.211 "nbd_device": "/dev/nbd6", 00:11:09.211 "bdev_name": "Nvme3n1" 00:11:09.211 } 00:11:09.211 ]' 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.211 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.469 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.727 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:09.987 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:09.987 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:09.987 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:09.987 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.987 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.987 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:09.987 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:09.987 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.987 09:08:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.987 09:08:04 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.247 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.503 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.761 09:08:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.018 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:11.276 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:11.276 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:11.276 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:11.534 
09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:11.534 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:11:11.792 /dev/nbd0 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:11.792 1+0 records in 00:11:11.792 1+0 records out 00:11:11.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528253 s, 7.8 MB/s 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:11.792 09:08:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:11:12.050 /dev/nbd1 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.050 09:08:07 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.050 1+0 records in 00:11:12.050 1+0 records out 00:11:12.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565199 s, 7.2 MB/s 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:12.050 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:11:12.309 /dev/nbd10 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.309 1+0 records in 00:11:12.309 1+0 records out 00:11:12.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063269 s, 6.5 MB/s 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:12.309 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:11:12.568 /dev/nbd11 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.568 1+0 records in 00:11:12.568 1+0 records out 00:11:12.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055158 s, 7.4 MB/s 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:12.568 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:11:13.134 /dev/nbd12 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
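Every nbd_start_disk above is followed by the same readiness probe: poll /proc/partitions until the node appears, then prove the device answers real I/O with a single 4 KiB O_DIRECT read whose size is checked to be non-zero. The waitfornbd helper being traced is approximately the following sketch (the retry delay is an assumption; the trace does not show one):

    # Approximate reconstruction of the waitfornbd probe traced above.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # A usable nbd device shows up in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between retries
        done

        # One direct-I/O block read proves the device services requests.
        dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]
    }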
00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.134 1+0 records in 00:11:13.134 1+0 records out 00:11:13.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737317 s, 5.6 MB/s 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:13.134 09:08:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.134 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:13.134 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:13.134 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.134 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:13.134 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:11:13.391 /dev/nbd13 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:13.391 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:13.392 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.392 1+0 records in 00:11:13.392 1+0 records out 00:11:13.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00379411 s, 1.1 MB/s 00:11:13.392 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.392 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:13.392 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.392 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:13.392 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:13.392 09:08:08 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.392 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:13.392 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:11:13.650 /dev/nbd14 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:13.650 1+0 records in 00:11:13.650 1+0 records out 00:11:13.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743946 s, 5.5 MB/s 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.650 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd0", 00:11:13.909 "bdev_name": "Nvme0n1" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd1", 00:11:13.909 "bdev_name": "Nvme1n1p1" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd10", 00:11:13.909 "bdev_name": "Nvme1n1p2" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd11", 00:11:13.909 "bdev_name": "Nvme2n1" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd12", 00:11:13.909 "bdev_name": "Nvme2n2" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd13", 00:11:13.909 "bdev_name": "Nvme2n3" 
00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd14", 00:11:13.909 "bdev_name": "Nvme3n1" 00:11:13.909 } 00:11:13.909 ]' 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd0", 00:11:13.909 "bdev_name": "Nvme0n1" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd1", 00:11:13.909 "bdev_name": "Nvme1n1p1" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd10", 00:11:13.909 "bdev_name": "Nvme1n1p2" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd11", 00:11:13.909 "bdev_name": "Nvme2n1" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd12", 00:11:13.909 "bdev_name": "Nvme2n2" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd13", 00:11:13.909 "bdev_name": "Nvme2n3" 00:11:13.909 }, 00:11:13.909 { 00:11:13.909 "nbd_device": "/dev/nbd14", 00:11:13.909 "bdev_name": "Nvme3n1" 00:11:13.909 } 00:11:13.909 ]' 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:13.909 /dev/nbd1 00:11:13.909 /dev/nbd10 00:11:13.909 /dev/nbd11 00:11:13.909 /dev/nbd12 00:11:13.909 /dev/nbd13 00:11:13.909 /dev/nbd14' 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:13.909 /dev/nbd1 00:11:13.909 /dev/nbd10 00:11:13.909 /dev/nbd11 00:11:13.909 /dev/nbd12 00:11:13.909 /dev/nbd13 00:11:13.909 /dev/nbd14' 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:11:13.909 09:08:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:11:13.909 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:13.909 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:13.909 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:13.909 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:13.909 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:13.909 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:13.909 256+0 records in 00:11:13.909 256+0 records out 00:11:13.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0084918 s, 123 MB/s 00:11:13.909 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:13.909 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:14.170 256+0 records in 00:11:14.170 256+0 records out 00:11:14.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.177918 s, 5.9 MB/s 00:11:14.170 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.170 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:14.431 256+0 records in 00:11:14.431 256+0 records out 00:11:14.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185951 s, 5.6 MB/s 00:11:14.431 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.431 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:14.697 256+0 records in 00:11:14.697 256+0 records out 00:11:14.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188235 s, 5.6 MB/s 00:11:14.697 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.697 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:14.697 256+0 records in 00:11:14.697 256+0 records out 00:11:14.697 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1889 s, 5.6 MB/s 00:11:14.697 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.697 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:14.980 256+0 records in 00:11:14.980 256+0 records out 00:11:14.980 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187547 s, 5.6 MB/s 00:11:14.980 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:14.980 09:08:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:15.240 256+0 records in 00:11:15.240 256+0 records out 00:11:15.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179135 s, 5.9 MB/s 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:15.240 256+0 records in 00:11:15.240 256+0 records out 00:11:15.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185048 s, 5.7 MB/s 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.240 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.499 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:15.757 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:15.757 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:15.757 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:15.758 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:15.758 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:15.758 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:15.758 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:15.758 09:08:10 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:15.758 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:15.758 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.016 09:08:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:16.274 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:16.274 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:16.274 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:16.274 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.274 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.274 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:16.275 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:16.275 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.275 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.275 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.533 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:16.792 09:08:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:17.050 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:17.308 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:17.309 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:17.567 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:17.567 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:17.567 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name=
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:11:17.826 09:08:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:11:18.085 malloc_lvol_verify
00:11:18.085 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:11:18.344 2d0a1b34-e208-4c8a-9fd7-3e5006a0a33e
00:11:18.344 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:11:18.602 f402034b-4560-4367-937c-3c372d6a05fb
00:11:18.602 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:11:18.861 /dev/nbd0
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:11:18.861 mke2fs 1.47.0 (5-Feb-2023)
00:11:18.861 Discarding device blocks: 0/4096 done
00:11:18.861 Creating filesystem with 4096 1k blocks and 1024 inodes
00:11:18.861
00:11:18.861 Allocating group tables: 0/1 done
00:11:18.861 Writing inode tables: 0/1 done
00:11:18.861 Creating journal (1024 blocks): done
00:11:18.861 Writing superblocks and filesystem accounting information: 0/1 done
00:11:18.861
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in
"${nbd_list[@]}" 00:11:18.861 09:08:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62904 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62904 ']' 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62904 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62904 00:11:19.120 killing process with pid 62904 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62904' 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62904 00:11:19.120 09:08:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62904 00:11:20.496 ************************************ 00:11:20.496 END TEST bdev_nbd 00:11:20.496 ************************************ 00:11:20.496 09:08:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:20.496 00:11:20.496 real 0m15.024s 00:11:20.496 user 0m21.385s 00:11:20.496 sys 0m4.867s 00:11:20.496 09:08:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.496 09:08:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:20.496 09:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:20.496 09:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:11:20.496 09:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:11:20.496 skipping fio tests on NVMe due to multi-ns failures. 00:11:20.496 09:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:11:20.496 09:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:20.496 09:08:15 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:20.496 09:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:11:20.496 09:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.496 09:08:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:20.496 ************************************ 00:11:20.496 START TEST bdev_verify 00:11:20.496 ************************************ 00:11:20.496 09:08:15 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:11:20.496 [2024-11-20 09:08:15.607731] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:11:20.496 [2024-11-20 09:08:15.607936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63359 ] 00:11:20.755 [2024-11-20 09:08:15.799564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:21.014 [2024-11-20 09:08:15.954417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.014 [2024-11-20 09:08:15.954428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.582 Running I/O for 5 seconds... 
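All three bdevperf stages in this part of the log consume the same pre-generated bdev.json, which the log never prints. For orientation, a minimal hypothetical config of the same shape, with a Malloc bdev standing in for the NVMe controllers the real file attaches; the top-level "subsystems" array is exactly what the json_config negative tests further down probe:

    # Hypothetical minimal bdevperf config; the real bdev.json in this run
    # attaches the NVMe controllers under test rather than a Malloc bdev.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    # Flags as in the run above: -q queue depth, -o IO size in bytes,
    # -w workload, -t run time in seconds, -m core mask (0x3 = cores 0 and 1),
    # plus -C exactly as the trace passes it.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3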
00:11:23.922 17920.00 IOPS, 70.00 MiB/s
[2024-11-20T09:08:19.976Z] 18112.00 IOPS, 70.75 MiB/s
[2024-11-20T09:08:20.912Z] 18197.33 IOPS, 71.08 MiB/s
[2024-11-20T09:08:21.848Z] 18080.00 IOPS, 70.62 MiB/s
[2024-11-20T09:08:21.848Z] 18150.40 IOPS, 70.90 MiB/s
00:11:26.728 Latency(us)
00:11:26.728 [2024-11-20T09:08:21.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:26.728 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x0 length 0xbd0bd
00:11:26.728 Nvme0n1 : 5.09 1306.39 5.10 0.00 0.00 97775.16 21686.46 89128.96
00:11:26.728 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:11:26.728 Nvme0n1 : 5.10 1255.47 4.90 0.00 0.00 101703.35 22878.02 91035.46
00:11:26.728 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x0 length 0x4ff80
00:11:26.728 Nvme1n1p1 : 5.10 1305.93 5.10 0.00 0.00 97667.55 18707.55 90082.21
00:11:26.728 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x4ff80 length 0x4ff80
00:11:26.728 Nvme1n1p1 : 5.10 1254.94 4.90 0.00 0.00 101500.19 24665.37 89605.59
00:11:26.728 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x0 length 0x4ff7f
00:11:26.728 Nvme1n1p2 : 5.10 1305.08 5.10 0.00 0.00 97545.27 20137.43 90082.21
00:11:26.728 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:11:26.728 Nvme1n1p2 : 5.11 1253.47 4.90 0.00 0.00 101386.48 26452.71 85315.96
00:11:26.728 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x0 length 0x80000
00:11:26.728 Nvme2n1 : 5.10 1304.30 5.09 0.00 0.00 97418.50 21328.99 88175.71
00:11:26.728 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x80000 length 0x80000
00:11:26.728 Nvme2n1 : 5.11 1252.55 4.89 0.00 0.00 101257.30 28240.06 82456.20
00:11:26.728 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x0 length 0x80000
00:11:26.728 Nvme2n2 : 5.10 1303.99 5.09 0.00 0.00 97259.03 20614.05 86269.21
00:11:26.728 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x80000 length 0x80000
00:11:26.728 Nvme2n2 : 5.11 1251.89 4.89 0.00 0.00 101115.98 25261.15 82932.83
00:11:26.728 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x0 length 0x80000
00:11:26.728 Nvme2n3 : 5.11 1303.57 5.09 0.00 0.00 97108.80 19541.64 84362.71
00:11:26.728 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x80000 length 0x80000
00:11:26.728 Nvme2n3 : 5.11 1251.40 4.89 0.00 0.00 100959.27 17992.61 87222.46
00:11:26.728 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x0 length 0x20000
00:11:26.728 Nvme3n1 : 5.11 1302.76 5.09 0.00 0.00 96907.43 13285.93 88175.71
00:11:26.728 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:26.728 Verification LBA range: start 0x20000 length 0x20000
00:11:26.728 Nvme3n1 : 5.12 1250.94 4.89 0.00 0.00 100831.46 14537.08 91035.46
00:11:26.728 [2024-11-20T09:08:21.848Z] ===================================================================================================================
00:11:26.728 [2024-11-20T09:08:21.848Z] Total : 17902.67 69.93 0.00 0.00 99278.93 13285.93 91035.46
00:11:28.107
00:11:28.107 real 0m7.597s
00:11:28.107 user 0m13.930s
00:11:28.107 sys 0m0.334s
00:11:28.107 09:08:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:28.107 09:08:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:11:28.107 ************************************
00:11:28.107 END TEST bdev_verify
00:11:28.107 ************************************
00:11:28.107 09:08:23 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:28.107 09:08:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:11:28.107 09:08:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:28.107 09:08:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:28.107 ************************************
00:11:28.107 START TEST bdev_verify_big_io
00:11:28.107 ************************************
00:11:28.107 09:08:23 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:28.365 [2024-11-20 09:08:23.254343] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:11:28.365 [2024-11-20 09:08:23.254583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63457 ]
00:11:28.365 [2024-11-20 09:08:23.435805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:28.625 [2024-11-20 09:08:23.565302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:28.625 [2024-11-20 09:08:23.565335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:29.561 Running I/O for 5 seconds...
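While the large-block verify pass runs, a word on reading these result blocks: per-job rows list runtime(s), IOPS, MiB/s, Fail/s, TO/s, then average/min/max latency in microseconds, and a Total row closes each table. Counting fields from the end sidesteps the variable timestamp prefixes when mining captured logs; a throwaway extraction sketch (bdevperf.log is an assumed capture file, not something this job produces):

    # Print aggregate IOPS and MiB/s from a captured bdevperf log.
    # The Total row carries 7 trailing numeric fields, so IOPS is $(NF-6).
    awk '/Total[[:space:]]+:/ { print "IOPS=" $(NF-6), "MiB/s=" $(NF-5) }' bdevperf.log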
00:11:34.660 1915.00 IOPS, 119.69 MiB/s
[2024-11-20T09:08:30.346Z] 3203.50 IOPS, 200.22 MiB/s
[2024-11-20T09:08:30.346Z] 3873.00 IOPS, 242.06 MiB/s
00:11:35.226 Latency(us)
00:11:35.226 [2024-11-20T09:08:30.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:35.226 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.226 Verification LBA range: start 0x0 length 0xbd0b
00:11:35.226 Nvme0n1 : 5.67 131.28 8.21 0.00 0.00 937826.56 18469.24 999006.95
00:11:35.226 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0xbd0b length 0xbd0b
00:11:35.227 Nvme0n1 : 5.64 147.17 9.20 0.00 0.00 842418.98 22043.93 1395559.33
00:11:35.227 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x0 length 0x4ff8
00:11:35.227 Nvme1n1p1 : 5.73 134.14 8.38 0.00 0.00 893023.49 82456.20 846486.81
00:11:35.227 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x4ff8 length 0x4ff8
00:11:35.227 Nvme1n1p1 : 5.64 154.92 9.68 0.00 0.00 779455.76 81026.33 777852.74
00:11:35.227 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x0 length 0x4ff7
00:11:35.227 Nvme1n1p2 : 5.74 130.90 8.18 0.00 0.00 904707.30 54811.93 1311673.25
00:11:35.227 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x4ff7 length 0x4ff7
00:11:35.227 Nvme1n1p2 : 5.70 158.13 9.88 0.00 0.00 751290.00 102474.47 777852.74
00:11:35.227 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x0 length 0x8000
00:11:35.227 Nvme2n1 : 5.79 136.77 8.55 0.00 0.00 846218.40 34078.72 1334551.27
00:11:35.227 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x8000 length 0x8000
00:11:35.227 Nvme2n1 : 5.70 157.40 9.84 0.00 0.00 735651.20 76736.70 777852.74
00:11:35.227 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x0 length 0x8000
00:11:35.227 Nvme2n2 : 5.79 135.68 8.48 0.00 0.00 827821.10 33840.41 1555705.48
00:11:35.227 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x8000 length 0x8000
00:11:35.227 Nvme2n2 : 5.75 167.09 10.44 0.00 0.00 687549.27 27644.28 835047.80
00:11:35.227 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x0 length 0x8000
00:11:35.227 Nvme2n3 : 5.81 152.87 9.55 0.00 0.00 719056.02 12690.15 850299.81
00:11:35.227 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x8000 length 0x8000
00:11:35.227 Nvme2n3 : 5.76 172.93 10.81 0.00 0.00 652674.62 21448.15 793104.76
00:11:35.227 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x0 length 0x2000
00:11:35.227 Nvme3n1 : 5.88 176.42 11.03 0.00 0.00 611719.71 930.91 1624339.55
00:11:35.227 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:35.227 Verification LBA range: start 0x2000 length 0x2000
00:11:35.227 Nvme3n1 : 5.77 182.61 11.41 0.00 0.00 605866.01 3961.95 789291.75
00:11:35.227 [2024-11-20T09:08:30.347Z] ===================================================================================================================
00:11:35.227 [2024-11-20T09:08:30.347Z] Total : 2138.30 133.64 0.00 0.00 759333.68 930.91 1624339.55
00:11:37.132
00:11:37.132 real 0m8.868s
00:11:37.132 user 0m16.446s
00:11:37.132 sys 0m0.380s
00:11:37.132 09:08:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:37.132 ************************************
00:11:37.132 END TEST bdev_verify_big_io
00:11:37.132 ************************************
00:11:37.132 09:08:32 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:11:37.132 09:08:32 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:37.132 09:08:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:37.132 09:08:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:37.132 09:08:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:37.132 ************************************
00:11:37.132 START TEST bdev_write_zeroes
00:11:37.132 ************************************
00:11:37.132 09:08:32 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:37.391 [2024-11-20 09:08:32.200054] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:11:37.391 [2024-11-20 09:08:32.200271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63572 ]
00:11:37.391 [2024-11-20 09:08:32.386287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:37.651 [2024-11-20 09:08:32.516019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:38.217 Running I/O for 1 seconds...
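The write_zeroes pass that just started only makes sense against bdevs that advertise the operation; each bdev reports this in its supported_io_types map, visible later in this log in the GPT bdev dumps ("write_zeroes": true). A quick manual check against a running target, assuming the default RPC socket /var/tmp/spdk.sock:

    # List bdevs that report write_zeroes support.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes) | .name'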
00:11:39.149 56000.00 IOPS, 218.75 MiB/s
00:11:39.149
00:11:39.150 Latency(us)
00:11:39.150 [2024-11-20T09:08:34.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:39.150 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.150 Nvme0n1 : 1.03 7957.22 31.08 0.00 0.00 16044.69 10902.81 27882.59
00:11:39.150 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.150 Nvme1n1p1 : 1.03 7947.19 31.04 0.00 0.00 16035.44 13345.51 27286.81
00:11:39.150 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.150 Nvme1n1p2 : 1.03 7937.63 31.01 0.00 0.00 16021.27 12630.57 26571.87
00:11:39.150 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.150 Nvme2n1 : 1.03 7928.86 30.97 0.00 0.00 15990.20 10902.81 25737.77
00:11:39.150 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.150 Nvme2n2 : 1.03 7919.29 30.93 0.00 0.00 15986.83 10366.60 25261.15
00:11:39.150 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.150 Nvme2n3 : 1.04 7910.54 30.90 0.00 0.00 15980.31 10545.34 26333.56
00:11:39.150 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:39.150 Nvme3n1 : 1.04 7902.08 30.87 0.00 0.00 15970.60 10366.60 28120.90
00:11:39.150 [2024-11-20T09:08:34.270Z] ===================================================================================================================
00:11:39.150 [2024-11-20T09:08:34.270Z] Total : 55502.82 216.81 0.00 0.00 16004.19 10366.60 28120.90
00:11:40.527
00:11:40.527 real 0m3.387s
00:11:40.527 user 0m2.939s
00:11:40.527 sys 0m0.324s
00:11:40.527 09:08:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:40.527 ************************************
00:11:40.527 END TEST bdev_write_zeroes
00:11:40.527 ************************************
00:11:40.527 09:08:35 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:11:40.527 09:08:35 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:40.527 09:08:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:40.527 09:08:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:40.527 09:08:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:11:40.527 ************************************
00:11:40.527 START TEST bdev_json_nonenclosed
00:11:40.527 ************************************
00:11:40.527 09:08:35 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:40.527 [2024-11-20 09:08:35.633795] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:11:40.527 [2024-11-20 09:08:35.633992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63630 ] 00:11:40.788 [2024-11-20 09:08:35.814595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.047 [2024-11-20 09:08:35.964543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.047 [2024-11-20 09:08:35.964716] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:41.047 [2024-11-20 09:08:35.964745] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:41.047 [2024-11-20 09:08:35.964759] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:41.306 00:11:41.306 real 0m0.712s 00:11:41.306 user 0m0.437s 00:11:41.306 sys 0m0.163s 00:11:41.306 09:08:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.306 09:08:36 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:41.306 ************************************ 00:11:41.306 END TEST bdev_json_nonenclosed 00:11:41.306 ************************************ 00:11:41.306 09:08:36 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:41.306 09:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:41.306 09:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.306 09:08:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:41.306 ************************************ 00:11:41.306 START TEST bdev_json_nonarray 00:11:41.306 ************************************ 00:11:41.306 09:08:36 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:41.566 [2024-11-20 09:08:36.426497] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:11:41.566 [2024-11-20 09:08:36.426738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63656 ] 00:11:41.566 [2024-11-20 09:08:36.608394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.825 [2024-11-20 09:08:36.755198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.825 [2024-11-20 09:08:36.755361] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
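Both JSON negative tests feed bdevperf a deliberately broken config and pass only when the app stops non-zero, which is what the *ERROR* and *WARNING* pairs around this point show. The repository's nonenclosed.json and nonarray.json are not reproduced in the log; the following are hypothetical reconstructions consistent with the two error strings, nothing more:

    # "Invalid JSON configuration: not enclosed in {}" -- top level is not an object.
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # "Invalid JSON configuration: 'subsystems' should be an array" -- wrong type.
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF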
00:11:41.825 [2024-11-20 09:08:36.755391] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:41.825 [2024-11-20 09:08:36.755406] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:42.084 00:11:42.084 real 0m0.723s 00:11:42.084 user 0m0.460s 00:11:42.084 sys 0m0.157s 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.084 ************************************ 00:11:42.084 END TEST bdev_json_nonarray 00:11:42.084 ************************************ 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:42.084 09:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:11:42.084 09:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:11:42.084 09:08:37 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:42.084 09:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:42.084 09:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.084 09:08:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:42.084 ************************************ 00:11:42.084 START TEST bdev_gpt_uuid 00:11:42.084 ************************************ 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63687 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63687 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63687 ']' 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.084 09:08:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:42.342 [2024-11-20 09:08:37.240082] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
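The bdev_gpt_uuid stage starting here asks the target for each GPT partition bdev by its unique partition GUID and cross-checks the metadata it reports. The essence of the check, sketched against the first partition UUID from the dump that follows (the harness itself goes through its rpc_cmd and jq helpers, as the trace shows):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    # Fetch the bdev by name (GPT partition bdevs are addressable by UUID)
    # and verify both the alias and the GPT unique_partition_guid match.
    bdev=$($RPC bdev_get_bdevs -b "$uuid")
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]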
00:11:42.342 [2024-11-20 09:08:37.240291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63687 ] 00:11:42.342 [2024-11-20 09:08:37.437861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.599 [2024-11-20 09:08:37.616146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.534 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:43.534 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:11:43.534 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:43.534 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.534 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:43.793 Some configs were skipped because the RPC state that can call them passed over. 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:11:43.793 { 00:11:43.793 "name": "Nvme1n1p1", 00:11:43.793 "aliases": [ 00:11:43.793 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:43.793 ], 00:11:43.793 "product_name": "GPT Disk", 00:11:43.793 "block_size": 4096, 00:11:43.793 "num_blocks": 655104, 00:11:43.793 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:43.793 "assigned_rate_limits": { 00:11:43.793 "rw_ios_per_sec": 0, 00:11:43.793 "rw_mbytes_per_sec": 0, 00:11:43.793 "r_mbytes_per_sec": 0, 00:11:43.793 "w_mbytes_per_sec": 0 00:11:43.793 }, 00:11:43.793 "claimed": false, 00:11:43.793 "zoned": false, 00:11:43.793 "supported_io_types": { 00:11:43.793 "read": true, 00:11:43.793 "write": true, 00:11:43.793 "unmap": true, 00:11:43.793 "flush": true, 00:11:43.793 "reset": true, 00:11:43.793 "nvme_admin": false, 00:11:43.793 "nvme_io": false, 00:11:43.793 "nvme_io_md": false, 00:11:43.793 "write_zeroes": true, 00:11:43.793 "zcopy": false, 00:11:43.793 "get_zone_info": false, 00:11:43.793 "zone_management": false, 00:11:43.793 "zone_append": false, 00:11:43.793 "compare": true, 00:11:43.793 "compare_and_write": false, 00:11:43.793 "abort": true, 00:11:43.793 "seek_hole": false, 00:11:43.793 "seek_data": false, 00:11:43.793 "copy": true, 00:11:43.793 "nvme_iov_md": false 00:11:43.793 }, 00:11:43.793 "driver_specific": { 
00:11:43.793 "gpt": { 00:11:43.793 "base_bdev": "Nvme1n1", 00:11:43.793 "offset_blocks": 256, 00:11:43.793 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:43.793 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:43.793 "partition_name": "SPDK_TEST_first" 00:11:43.793 } 00:11:43.793 } 00:11:43.793 } 00:11:43.793 ]' 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:11:43.793 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:11:44.051 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:44.051 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:44.051 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:44.051 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:44.051 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.051 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:44.051 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.051 09:08:38 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:11:44.051 { 00:11:44.051 "name": "Nvme1n1p2", 00:11:44.051 "aliases": [ 00:11:44.051 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:44.051 ], 00:11:44.051 "product_name": "GPT Disk", 00:11:44.051 "block_size": 4096, 00:11:44.051 "num_blocks": 655103, 00:11:44.051 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:44.051 "assigned_rate_limits": { 00:11:44.051 "rw_ios_per_sec": 0, 00:11:44.051 "rw_mbytes_per_sec": 0, 00:11:44.051 "r_mbytes_per_sec": 0, 00:11:44.051 "w_mbytes_per_sec": 0 00:11:44.051 }, 00:11:44.051 "claimed": false, 00:11:44.051 "zoned": false, 00:11:44.051 "supported_io_types": { 00:11:44.052 "read": true, 00:11:44.052 "write": true, 00:11:44.052 "unmap": true, 00:11:44.052 "flush": true, 00:11:44.052 "reset": true, 00:11:44.052 "nvme_admin": false, 00:11:44.052 "nvme_io": false, 00:11:44.052 "nvme_io_md": false, 00:11:44.052 "write_zeroes": true, 00:11:44.052 "zcopy": false, 00:11:44.052 "get_zone_info": false, 00:11:44.052 "zone_management": false, 00:11:44.052 "zone_append": false, 00:11:44.052 "compare": true, 00:11:44.052 "compare_and_write": false, 00:11:44.052 "abort": true, 00:11:44.052 "seek_hole": false, 00:11:44.052 "seek_data": false, 00:11:44.052 "copy": true, 00:11:44.052 "nvme_iov_md": false 00:11:44.052 }, 00:11:44.052 "driver_specific": { 00:11:44.052 "gpt": { 00:11:44.052 "base_bdev": "Nvme1n1", 00:11:44.052 "offset_blocks": 655360, 00:11:44.052 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:44.052 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:44.052 "partition_name": "SPDK_TEST_second" 00:11:44.052 } 00:11:44.052 } 00:11:44.052 } 00:11:44.052 ]' 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63687 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63687 ']' 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63687 00:11:44.052 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:11:44.310 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.310 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63687 00:11:44.310 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.310 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.310 killing process with pid 63687 00:11:44.310 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63687' 00:11:44.310 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63687 00:11:44.310 09:08:39 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63687 00:11:46.316 00:11:46.316 real 0m4.073s 00:11:46.316 user 0m4.331s 00:11:46.316 sys 0m0.614s 00:11:46.316 09:08:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.316 09:08:41 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:46.316 ************************************ 00:11:46.316 END TEST bdev_gpt_uuid 00:11:46.316 ************************************ 00:11:46.316 09:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:11:46.316 09:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:11:46.316 09:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:11:46.316 09:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:46.316 09:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:46.316 09:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:46.316 09:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:46.316 09:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:46.316 09:08:41 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:46.575 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.834 Waiting for block devices as requested 00:11:46.834 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:46.834 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:11:47.093 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:47.093 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.365 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:52.365 09:08:47 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:52.365 09:08:47 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:52.365 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:52.365 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:52.365 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:52.365 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:52.365 09:08:47 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:52.365 00:11:52.365 real 1m6.133s 00:11:52.365 user 1m24.455s 00:11:52.365 sys 0m11.071s 00:11:52.365 09:08:47 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.365 09:08:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:52.365 ************************************ 00:11:52.365 END TEST blockdev_nvme_gpt 00:11:52.365 ************************************ 00:11:52.624
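The wipefs lines in that teardown are worth decoding: the erased bytes 45 46 49 20 50 41 52 54 are ASCII "EFI PART", the GPT header signature, removed both from the primary header at offset 0x1000 (LBA 1 of this 4096-byte-sector QEMU namespace) and from the backup header near the end of the 5 GiB disk, while the 55 aa pair at offset 0x1fe is the protective MBR's boot signature. A quick way to inspect those signatures by hand (device path is an example; with 512-byte sectors the primary GPT header sits at offset 0x200 instead):

    # Sketch: read back the two signatures wipefs erased above.
    # Assumes a 4096-byte logical block size, as on this QEMU namespace.
    dev=/dev/nvme0n1                                             # example device
    dd if="$dev" bs=4096 skip=1 count=1 2>/dev/null | head -c 8  # prints "EFI PART" while a GPT header is present
    dd if="$dev" bs=1 skip=510 count=2 2>/dev/null | xxd -p      # prints "55aa" while a protective MBR is present

09:08:47 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:52.624 09:08:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:52.624 09:08:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.624 09:08:47 -- common/autotest_common.sh@10 -- # set +x 00:11:52.624 ************************************ 00:11:52.624 START TEST nvme 00:11:52.624 ************************************ 00:11:52.624 09:08:47 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:52.624 * Looking for test storage... 00:11:52.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:52.624 09:08:47 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.624 09:08:47 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.624 09:08:47 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.624 09:08:47 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.624 09:08:47 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.624 09:08:47 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.624 09:08:47 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.624 09:08:47 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.624 09:08:47 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.624 09:08:47 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.624 09:08:47 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.624 09:08:47 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.624 09:08:47 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.624 09:08:47 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.624 09:08:47 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.624 09:08:47 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:52.624 09:08:47 nvme -- scripts/common.sh@345 -- # : 1 00:11:52.624 09:08:47 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.624 09:08:47 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?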
ver1_l : ver2_l) )) 00:11:52.624 09:08:47 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:52.624 09:08:47 nvme -- scripts/common.sh@353 -- # local d=1 00:11:52.624 09:08:47 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.624 09:08:47 nvme -- scripts/common.sh@355 -- # echo 1 00:11:52.624 09:08:47 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.624 09:08:47 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:52.624 09:08:47 nvme -- scripts/common.sh@353 -- # local d=2 00:11:52.624 09:08:47 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.624 09:08:47 nvme -- scripts/common.sh@355 -- # echo 2 00:11:52.624 09:08:47 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.624 09:08:47 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.624 09:08:47 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.624 09:08:47 nvme -- scripts/common.sh@368 -- # return 0 00:11:52.624
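The cmp_versions trace that just returned is a compact shell idiom for ordering dotted version strings: split both strings into arrays on ., - and :, validate each component as a number, then compare component by component until one side differs (here 1 < 2, so lcov 1.15 orders before 2 and the 1.x-style --rc coverage options are selected below). A simplified sketch of the idea, not the exact scripts/common.sh implementation (which also handles >, = and mixed-length versions via a counter tuple):

    # Sketch: succeed when version $1 sorts before version $2.
    # Assumes numeric components; the real helper validates each with a regex.
    version_lt() {
        local -a v1 v2
        local i n a b
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                          # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 sorts before 2"

09:08:47 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.624 09:08:47 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.624 --rc genhtml_branch_coverage=1 00:11:52.624 --rc genhtml_function_coverage=1 00:11:52.625 --rc genhtml_legend=1 00:11:52.625 --rc geninfo_all_blocks=1 00:11:52.625 --rc geninfo_unexecuted_blocks=1 00:11:52.625 00:11:52.625 ' 00:11:52.625 09:08:47 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.625 --rc genhtml_branch_coverage=1 00:11:52.625 --rc genhtml_function_coverage=1 00:11:52.625 --rc genhtml_legend=1 00:11:52.625 --rc geninfo_all_blocks=1 00:11:52.625 --rc geninfo_unexecuted_blocks=1 00:11:52.625 00:11:52.625 ' 00:11:52.625 09:08:47 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.625 --rc genhtml_branch_coverage=1 00:11:52.625 --rc genhtml_function_coverage=1 00:11:52.625 --rc genhtml_legend=1 00:11:52.625 --rc geninfo_all_blocks=1 00:11:52.625 --rc geninfo_unexecuted_blocks=1 00:11:52.625 00:11:52.625 ' 00:11:52.625 09:08:47 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.625 --rc genhtml_branch_coverage=1 00:11:52.625 --rc genhtml_function_coverage=1 00:11:52.625 --rc genhtml_legend=1 00:11:52.625 --rc geninfo_all_blocks=1 00:11:52.625 --rc geninfo_unexecuted_blocks=1 00:11:52.625 00:11:52.625 ' 00:11:52.625 09:08:47 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:53.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:53.758 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.758 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.758 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.758 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:54.016 09:08:48 nvme -- nvme/nvme.sh@79 -- # uname 00:11:54.017 09:08:48 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:54.017 09:08:48 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:54.017 09:08:48 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:54.017 09:08:48 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:54.017 09:08:48 nvme --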
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:11:54.017 09:08:48 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:11:54.017 Waiting for stub to ready for secondary processes... 00:11:54.017 09:08:48 nvme -- common/autotest_common.sh@1075 -- # stubpid=64338 00:11:54.017 09:08:48 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:54.017 09:08:48 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:11:54.017 09:08:48 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:54.017 09:08:48 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64338 ]] 00:11:54.017 09:08:48 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:54.017 [2024-11-20 09:08:48.989799] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:11:54.017 [2024-11-20 09:08:48.989993] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:54.951 09:08:49 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:54.951 09:08:49 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64338 ]] 00:11:54.951 09:08:49 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:55.208 [2024-11-20 09:08:50.312947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.466 [2024-11-20 09:08:50.458986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.466 [2024-11-20 09:08:50.459088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.466 [2024-11-20 09:08:50.459115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.466 [2024-11-20 09:08:50.476735] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:55.466 [2024-11-20 09:08:50.476942] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:55.466 [2024-11-20 09:08:50.490593] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:55.466 [2024-11-20 09:08:50.490871] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:55.466 [2024-11-20 09:08:50.495332] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:55.466 [2024-11-20 09:08:50.495750] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:55.466 [2024-11-20 09:08:50.495892] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:55.466 [2024-11-20 09:08:50.500188] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:55.466 [2024-11-20 09:08:50.500395] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:55.466 [2024-11-20 09:08:50.500478] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:55.466 [2024-11-20 09:08:50.503229] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:55.466 [2024-11-20 09:08:50.503521] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:55.466 [2024-11-20 09:08:50.503607] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:55.466 [2024-11-20 09:08:50.503698] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:55.466 [2024-11-20 09:08:50.503763] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:56.063 09:08:50 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:56.063 done. 00:11:56.063 09:08:50 nvme -- common/autotest_common.sh@1082 -- # echo done.
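The handshake that just printed done. is the harness's stub-readiness wait: the stub is launched in the background, and a loop polls for the sentinel file it creates (/var/run/spdk_stub0) while also checking /proc/<pid>, so a stub that crashes during DPDK initialization fails the job quickly instead of hanging it. A minimal sketch of the same wait, with paths and options taken from the trace (the failure branch is ours, not SPDK's exact code):

    # Sketch: start the stub and wait for its readiness sentinel.
    stub=/home/vagrant/spdk_repo/spdk/test/app/stub/stub
    "$stub" -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        # fail fast if the stub died before creating its sentinel
        [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
        sleep 1s
    done
    echo done.

00:11:56.063 09:08:50 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:56.063 09:08:50 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:11:56.063 09:08:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.063 09:08:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.063 ************************************ 00:11:56.063 START TEST nvme_reset 00:11:56.063 ************************************ 00:11:56.063 09:08:50 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:56.322 Initializing NVMe Controllers 00:11:56.322 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:56.322 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:56.322 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:56.322 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:56.322 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:56.322 00:11:56.322 real 0m0.340s 00:11:56.322 user 0m0.118s 00:11:56.322 sys 0m0.161s 00:11:56.322 09:08:51 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.322 09:08:51 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:56.322 ************************************ 00:11:56.322 END TEST nvme_reset 00:11:56.322 ************************************ 00:11:56.322 09:08:51 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:56.322 09:08:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:56.322 09:08:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.322 09:08:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.322 ************************************ 00:11:56.322 START TEST nvme_identify 00:11:56.322 ************************************ 00:11:56.322 09:08:51 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:11:56.322 09:08:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:56.322 09:08:51 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:56.322 09:08:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:56.322 09:08:51 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:56.322 09:08:51 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:56.322 09:08:51 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:11:56.322 09:08:51 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:56.322 09:08:51 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:56.322 09:08:51 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:56.322 09:08:51 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:56.322 09:08:51 nvme.nvme_identify --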
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
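The bdf list printed above comes from get_nvme_bdfs, which builds the device list by parsing the JSON bdev configuration emitted by gen_nvme.sh rather than walking sysfs directly. A standalone sketch of the same enumeration (the rootdir value is an assumption taken from the paths in this log):

    # Sketch: enumerate NVMe PCI addresses the way get_nvme_bdfs does above.
    rootdir=/home/vagrant/spdk_repo/spdk   # assumed checkout location, as in the trace
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe devices found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"

00:11:56.322 09:08:51 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:56.580 [2024-11-20 09:08:51.697630] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64371 terminated unexpected 00:11:56.842 ===================================================== 00:11:56.842 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:56.842 ===================================================== 00:11:56.842 Controller Capabilities/Features 00:11:56.842 ================================ 00:11:56.842 Vendor ID: 1b36 00:11:56.842 Subsystem Vendor ID: 1af4 00:11:56.842 Serial Number: 12340 00:11:56.842 Model Number: QEMU NVMe Ctrl 00:11:56.842 Firmware Version: 8.0.0 00:11:56.842 Recommended Arb Burst: 6 00:11:56.842 IEEE OUI Identifier: 00 54 52 00:11:56.842 Multi-path I/O 00:11:56.842 May have multiple subsystem ports: No 00:11:56.842 May have multiple controllers: No 00:11:56.842 Associated with SR-IOV VF: No 00:11:56.842 Max Data Transfer Size: 524288 00:11:56.842 Max Number of Namespaces: 256 00:11:56.842 Max Number of I/O Queues: 64 00:11:56.842 NVMe Specification Version (VS): 1.4 00:11:56.842 NVMe Specification Version (Identify): 1.4 00:11:56.842 Maximum Queue Entries: 2048 00:11:56.842 Contiguous Queues Required: Yes 00:11:56.842 Arbitration Mechanisms Supported 00:11:56.842 Weighted Round Robin: Not Supported 00:11:56.842 Vendor Specific: Not Supported 00:11:56.842 Reset Timeout: 7500 ms 00:11:56.842 Doorbell Stride: 4 bytes 00:11:56.842 NVM Subsystem Reset: Not Supported 00:11:56.842 Command Sets Supported 00:11:56.842 NVM Command Set: Supported 00:11:56.842 Boot Partition: Not Supported 00:11:56.842 Memory Page Size Minimum: 4096 bytes 00:11:56.842 Memory Page Size Maximum: 65536 bytes 00:11:56.842 Persistent Memory Region: Not Supported 00:11:56.842 Optional Asynchronous Events Supported 00:11:56.842 Namespace Attribute Notices: Supported 00:11:56.842 Firmware Activation Notices: Not Supported 00:11:56.842 ANA Change Notices: Not Supported 00:11:56.842 PLE Aggregate Log Change Notices: Not Supported 00:11:56.842 LBA Status Info Alert Notices: Not Supported 00:11:56.842 EGE Aggregate Log Change Notices: Not Supported 00:11:56.842 Normal NVM Subsystem Shutdown event: Not Supported 00:11:56.842 Zone Descriptor Change Notices: Not Supported 00:11:56.842 Discovery Log Change Notices: Not Supported 00:11:56.842 Controller Attributes 00:11:56.842 128-bit Host Identifier: Not Supported 00:11:56.842 Non-Operational Permissive Mode: Not Supported 00:11:56.842 NVM Sets: Not Supported 00:11:56.842 Read Recovery Levels: Not Supported 00:11:56.842 Endurance Groups: Not Supported 00:11:56.842 Predictable Latency Mode: Not Supported 00:11:56.842 Traffic Based Keep ALive: Not Supported 00:11:56.842 Namespace Granularity: Not Supported 00:11:56.842 SQ Associations: Not Supported 00:11:56.842 UUID List: Not Supported 00:11:56.842 Multi-Domain Subsystem: Not Supported 00:11:56.842 Fixed Capacity Management: Not Supported 00:11:56.842 Variable Capacity Management: Not Supported 00:11:56.842 Delete Endurance Group: Not Supported 00:11:56.842 Delete NVM Set: Not Supported 00:11:56.842 Extended LBA Formats Supported: Supported 00:11:56.842 Flexible Data Placement Supported: Not Supported 00:11:56.842 00:11:56.842 Controller Memory Buffer Support 00:11:56.842 ================================ 00:11:56.842 Supported: No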
00:11:56.842 00:11:56.842 Persistent Memory Region Support 00:11:56.842 ================================ 00:11:56.842 Supported: No 00:11:56.842 00:11:56.842 Admin Command Set Attributes 00:11:56.842 ============================ 00:11:56.842 Security Send/Receive: Not Supported 00:11:56.842 Format NVM: Supported 00:11:56.842 Firmware Activate/Download: Not Supported 00:11:56.842 Namespace Management: Supported 00:11:56.842 Device Self-Test: Not Supported 00:11:56.842 Directives: Supported 00:11:56.842 NVMe-MI: Not Supported 00:11:56.842 Virtualization Management: Not Supported 00:11:56.842 Doorbell Buffer Config: Supported 00:11:56.842 Get LBA Status Capability: Not Supported 00:11:56.842 Command & Feature Lockdown Capability: Not Supported 00:11:56.842 Abort Command Limit: 4 00:11:56.842 Async Event Request Limit: 4 00:11:56.842 Number of Firmware Slots: N/A 00:11:56.842 Firmware Slot 1 Read-Only: N/A 00:11:56.842 Firmware Activation Without Reset: N/A 00:11:56.842 Multiple Update Detection Support: N/A 00:11:56.842 Firmware Update Granularity: No Information Provided 00:11:56.842 Per-Namespace SMART Log: Yes 00:11:56.842 Asymmetric Namespace Access Log Page: Not Supported 00:11:56.842 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:56.842 Command Effects Log Page: Supported 00:11:56.842 Get Log Page Extended Data: Supported 00:11:56.842 Telemetry Log Pages: Not Supported 00:11:56.842 Persistent Event Log Pages: Not Supported 00:11:56.842 Supported Log Pages Log Page: May Support 00:11:56.842 Commands Supported & Effects Log Page: Not Supported 00:11:56.842 Feature Identifiers & Effects Log Page:May Support 00:11:56.842 NVMe-MI Commands & Effects Log Page: May Support 00:11:56.842 Data Area 4 for Telemetry Log: Not Supported 00:11:56.842 Error Log Page Entries Supported: 1 00:11:56.842 Keep Alive: Not Supported 00:11:56.842 00:11:56.843 NVM Command Set Attributes 00:11:56.843 ========================== 00:11:56.843 Submission Queue Entry Size 00:11:56.843 Max: 64 00:11:56.843 Min: 64 00:11:56.843 Completion Queue Entry Size 00:11:56.843 Max: 16 00:11:56.843 Min: 16 00:11:56.843 Number of Namespaces: 256 00:11:56.843 Compare Command: Supported 00:11:56.843 Write Uncorrectable Command: Not Supported 00:11:56.843 Dataset Management Command: Supported 00:11:56.843 Write Zeroes Command: Supported 00:11:56.843 Set Features Save Field: Supported 00:11:56.843 Reservations: Not Supported 00:11:56.843 Timestamp: Supported 00:11:56.843 Copy: Supported 00:11:56.843 Volatile Write Cache: Present 00:11:56.843 Atomic Write Unit (Normal): 1 00:11:56.843 Atomic Write Unit (PFail): 1 00:11:56.843 Atomic Compare & Write Unit: 1 00:11:56.843 Fused Compare & Write: Not Supported 00:11:56.843 Scatter-Gather List 00:11:56.843 SGL Command Set: Supported 00:11:56.843 SGL Keyed: Not Supported 00:11:56.843 SGL Bit Bucket Descriptor: Not Supported 00:11:56.843 SGL Metadata Pointer: Not Supported 00:11:56.843 Oversized SGL: Not Supported 00:11:56.843 SGL Metadata Address: Not Supported 00:11:56.843 SGL Offset: Not Supported 00:11:56.843 Transport SGL Data Block: Not Supported 00:11:56.843 Replay Protected Memory Block: Not Supported 00:11:56.843 00:11:56.843 Firmware Slot Information 00:11:56.843 ========================= 00:11:56.843 Active slot: 1 00:11:56.843 Slot 1 Firmware Revision: 1.0 00:11:56.843 00:11:56.843 00:11:56.843 Commands Supported and Effects 00:11:56.843 ============================== 00:11:56.843 Admin Commands 00:11:56.843 -------------- 00:11:56.843 Delete I/O Submission Queue (00h): Supported 
00:11:56.843 Create I/O Submission Queue (01h): Supported 00:11:56.843 Get Log Page (02h): Supported 00:11:56.843 Delete I/O Completion Queue (04h): Supported 00:11:56.843 Create I/O Completion Queue (05h): Supported 00:11:56.843 Identify (06h): Supported 00:11:56.843 Abort (08h): Supported 00:11:56.843 Set Features (09h): Supported 00:11:56.843 Get Features (0Ah): Supported 00:11:56.843 Asynchronous Event Request (0Ch): Supported 00:11:56.843 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:56.843 Directive Send (19h): Supported 00:11:56.843 Directive Receive (1Ah): Supported 00:11:56.843 Virtualization Management (1Ch): Supported 00:11:56.843 Doorbell Buffer Config (7Ch): Supported 00:11:56.843 Format NVM (80h): Supported LBA-Change 00:11:56.843 I/O Commands 00:11:56.843 ------------ 00:11:56.843 Flush (00h): Supported LBA-Change 00:11:56.843 Write (01h): Supported LBA-Change 00:11:56.843 Read (02h): Supported 00:11:56.843 Compare (05h): Supported 00:11:56.843 Write Zeroes (08h): Supported LBA-Change 00:11:56.843 Dataset Management (09h): Supported LBA-Change 00:11:56.843 Unknown (0Ch): Supported 00:11:56.843 Unknown (12h): Supported 00:11:56.843 Copy (19h): Supported LBA-Change 00:11:56.843 Unknown (1Dh): Supported LBA-Change 00:11:56.843 00:11:56.843 Error Log 00:11:56.843 ========= 00:11:56.843 00:11:56.843 Arbitration 00:11:56.843 =========== 00:11:56.843 Arbitration Burst: no limit 00:11:56.843 00:11:56.843 Power Management 00:11:56.843 ================ 00:11:56.843 Number of Power States: 1 00:11:56.843 Current Power State: Power State #0 00:11:56.843 Power State #0: 00:11:56.843 Max Power: 25.00 W 00:11:56.843 Non-Operational State: Operational 00:11:56.843 Entry Latency: 16 microseconds 00:11:56.843 Exit Latency: 4 microseconds 00:11:56.843 Relative Read Throughput: 0 00:11:56.843 Relative Read Latency: 0 00:11:56.843 Relative Write Throughput: 0 00:11:56.843 Relative Write Latency: 0 00:11:56.843 [2024-11-20 09:08:51.699640] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64371 terminated unexpected 00:11:56.843 Idle Power: Not Reported 00:11:56.843 Active Power: Not Reported 00:11:56.843 Non-Operational Permissive Mode: Not Supported 00:11:56.843 00:11:56.843 Health Information 00:11:56.843 ================== 00:11:56.843 Critical Warnings: 00:11:56.843 Available Spare Space: OK 00:11:56.843 Temperature: OK 00:11:56.843 Device Reliability: OK 00:11:56.843 Read Only: No 00:11:56.843 Volatile Memory Backup: OK 00:11:56.843 Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.843 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:56.843 Available Spare: 0% 00:11:56.843 Available Spare Threshold: 0% 00:11:56.843 Life Percentage Used: 0% 00:11:56.843 Data Units Read: 703 00:11:56.843 Data Units Written: 631 00:11:56.843 Host Read Commands: 32028 00:11:56.843 Host Write Commands: 31814 00:11:56.843 Controller Busy Time: 0 minutes 00:11:56.843 Power Cycles: 0 00:11:56.843 Power On Hours: 0 hours 00:11:56.843 Unsafe Shutdowns: 0 00:11:56.843 Unrecoverable Media Errors: 0 00:11:56.843 Lifetime Error Log Entries: 0 00:11:56.843 Warning Temperature Time: 0 minutes 00:11:56.843 Critical Temperature Time: 0 minutes 00:11:56.843 00:11:56.843 Number of Queues 00:11:56.843 ================ 00:11:56.843 Number of I/O Submission Queues: 64 00:11:56.843 Number of I/O Completion Queues: 64 00:11:56.843 00:11:56.843 ZNS Specific Controller Data 00:11:56.843 ============================ 00:11:56.843 Zone Append Size Limit: 0 00:11:56.843
00:11:56.843 00:11:56.843 Active Namespaces 00:11:56.843 ================= 00:11:56.843 Namespace ID:1 00:11:56.843 Error Recovery Timeout: Unlimited 00:11:56.843 Command Set Identifier: NVM (00h) 00:11:56.843 Deallocate: Supported 00:11:56.843 Deallocated/Unwritten Error: Supported 00:11:56.843 Deallocated Read Value: All 0x00 00:11:56.843 Deallocate in Write Zeroes: Not Supported 00:11:56.843 Deallocated Guard Field: 0xFFFF 00:11:56.843 Flush: Supported 00:11:56.843 Reservation: Not Supported 00:11:56.843 Metadata Transferred as: Separate Metadata Buffer 00:11:56.843 Namespace Sharing Capabilities: Private 00:11:56.843 Size (in LBAs): 1548666 (5GiB) 00:11:56.843 Capacity (in LBAs): 1548666 (5GiB) 00:11:56.843 Utilization (in LBAs): 1548666 (5GiB) 00:11:56.843 Thin Provisioning: Not Supported 00:11:56.843 Per-NS Atomic Units: No 00:11:56.843 Maximum Single Source Range Length: 128 00:11:56.843 Maximum Copy Length: 128 00:11:56.843 Maximum Source Range Count: 128 00:11:56.843 NGUID/EUI64 Never Reused: No 00:11:56.843 Namespace Write Protected: No 00:11:56.843 Number of LBA Formats: 8 00:11:56.843 Current LBA Format: LBA Format #07 00:11:56.843 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:56.843 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:56.843 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:56.843 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:56.843 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:56.843 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:56.843 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:56.843 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:56.843 00:11:56.843 NVM Specific Namespace Data 00:11:56.843 =========================== 00:11:56.843 Logical Block Storage Tag Mask: 0 00:11:56.843 Protection Information Capabilities: 00:11:56.843 16b Guard Protection Information Storage Tag Support: No 00:11:56.843 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:56.843 Storage Tag Check Read Support: No 00:11:56.843 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.843 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.843 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.843 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.843 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.843 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.843 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.843 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.843 ===================================================== 00:11:56.843 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:56.843 ===================================================== 00:11:56.843 Controller Capabilities/Features 00:11:56.843 ================================ 00:11:56.843 Vendor ID: 1b36 00:11:56.843 Subsystem Vendor ID: 1af4 00:11:56.843 Serial Number: 12341 00:11:56.843 Model Number: QEMU NVMe Ctrl 00:11:56.843 Firmware Version: 8.0.0 00:11:56.843 Recommended Arb Burst: 6 00:11:56.843 IEEE OUI Identifier: 00 54 52 00:11:56.843 Multi-path I/O 00:11:56.843 May have multiple subsystem ports: No 00:11:56.843 May have multiple controllers: No 
00:11:56.843 Associated with SR-IOV VF: No 00:11:56.843 Max Data Transfer Size: 524288 00:11:56.844 Max Number of Namespaces: 256 00:11:56.844 Max Number of I/O Queues: 64 00:11:56.844 NVMe Specification Version (VS): 1.4 00:11:56.844 NVMe Specification Version (Identify): 1.4 00:11:56.844 Maximum Queue Entries: 2048 00:11:56.844 Contiguous Queues Required: Yes 00:11:56.844 Arbitration Mechanisms Supported 00:11:56.844 Weighted Round Robin: Not Supported 00:11:56.844 Vendor Specific: Not Supported 00:11:56.844 Reset Timeout: 7500 ms 00:11:56.844 Doorbell Stride: 4 bytes 00:11:56.844 NVM Subsystem Reset: Not Supported 00:11:56.844 Command Sets Supported 00:11:56.844 NVM Command Set: Supported 00:11:56.844 Boot Partition: Not Supported 00:11:56.844 Memory Page Size Minimum: 4096 bytes 00:11:56.844 Memory Page Size Maximum: 65536 bytes 00:11:56.844 Persistent Memory Region: Not Supported 00:11:56.844 Optional Asynchronous Events Supported 00:11:56.844 Namespace Attribute Notices: Supported 00:11:56.844 Firmware Activation Notices: Not Supported 00:11:56.844 ANA Change Notices: Not Supported 00:11:56.844 PLE Aggregate Log Change Notices: Not Supported 00:11:56.844 LBA Status Info Alert Notices: Not Supported 00:11:56.844 EGE Aggregate Log Change Notices: Not Supported 00:11:56.844 Normal NVM Subsystem Shutdown event: Not Supported 00:11:56.844 Zone Descriptor Change Notices: Not Supported 00:11:56.844 Discovery Log Change Notices: Not Supported 00:11:56.844 Controller Attributes 00:11:56.844 128-bit Host Identifier: Not Supported 00:11:56.844 Non-Operational Permissive Mode: Not Supported 00:11:56.844 NVM Sets: Not Supported 00:11:56.844 Read Recovery Levels: Not Supported 00:11:56.844 Endurance Groups: Not Supported 00:11:56.844 Predictable Latency Mode: Not Supported 00:11:56.844 Traffic Based Keep ALive: Not Supported 00:11:56.844 Namespace Granularity: Not Supported 00:11:56.844 SQ Associations: Not Supported 00:11:56.844 UUID List: Not Supported 00:11:56.844 Multi-Domain Subsystem: Not Supported 00:11:56.844 Fixed Capacity Management: Not Supported 00:11:56.844 Variable Capacity Management: Not Supported 00:11:56.844 Delete Endurance Group: Not Supported 00:11:56.844 Delete NVM Set: Not Supported 00:11:56.844 Extended LBA Formats Supported: Supported 00:11:56.844 Flexible Data Placement Supported: Not Supported 00:11:56.844 00:11:56.844 Controller Memory Buffer Support 00:11:56.844 ================================ 00:11:56.844 Supported: No 00:11:56.844 00:11:56.844 Persistent Memory Region Support 00:11:56.844 ================================ 00:11:56.844 Supported: No 00:11:56.844 00:11:56.844 Admin Command Set Attributes 00:11:56.844 ============================ 00:11:56.844 Security Send/Receive: Not Supported 00:11:56.844 Format NVM: Supported 00:11:56.844 Firmware Activate/Download: Not Supported 00:11:56.844 Namespace Management: Supported 00:11:56.844 Device Self-Test: Not Supported 00:11:56.844 Directives: Supported 00:11:56.844 NVMe-MI: Not Supported 00:11:56.844 Virtualization Management: Not Supported 00:11:56.844 Doorbell Buffer Config: Supported 00:11:56.844 Get LBA Status Capability: Not Supported 00:11:56.844 Command & Feature Lockdown Capability: Not Supported 00:11:56.844 Abort Command Limit: 4 00:11:56.844 Async Event Request Limit: 4 00:11:56.844 Number of Firmware Slots: N/A 00:11:56.844 Firmware Slot 1 Read-Only: N/A 00:11:56.844 Firmware Activation Without Reset: N/A 00:11:56.844 Multiple Update Detection Support: N/A 00:11:56.844 Firmware Update Granularity: No 
Information Provided 00:11:56.844 Per-Namespace SMART Log: Yes 00:11:56.844 Asymmetric Namespace Access Log Page: Not Supported 00:11:56.844 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:56.844 Command Effects Log Page: Supported 00:11:56.844 Get Log Page Extended Data: Supported 00:11:56.844 Telemetry Log Pages: Not Supported 00:11:56.844 Persistent Event Log Pages: Not Supported 00:11:56.844 Supported Log Pages Log Page: May Support 00:11:56.844 Commands Supported & Effects Log Page: Not Supported 00:11:56.844 Feature Identifiers & Effects Log Page:May Support 00:11:56.844 NVMe-MI Commands & Effects Log Page: May Support 00:11:56.844 Data Area 4 for Telemetry Log: Not Supported 00:11:56.844 Error Log Page Entries Supported: 1 00:11:56.844 Keep Alive: Not Supported 00:11:56.844 00:11:56.844 NVM Command Set Attributes 00:11:56.844 ========================== 00:11:56.844 Submission Queue Entry Size 00:11:56.844 Max: 64 00:11:56.844 Min: 64 00:11:56.844 Completion Queue Entry Size 00:11:56.844 Max: 16 00:11:56.844 Min: 16 00:11:56.844 Number of Namespaces: 256 00:11:56.844 Compare Command: Supported 00:11:56.844 Write Uncorrectable Command: Not Supported 00:11:56.844 Dataset Management Command: Supported 00:11:56.844 Write Zeroes Command: Supported 00:11:56.844 Set Features Save Field: Supported 00:11:56.844 Reservations: Not Supported 00:11:56.844 Timestamp: Supported 00:11:56.844 Copy: Supported 00:11:56.844 Volatile Write Cache: Present 00:11:56.844 Atomic Write Unit (Normal): 1 00:11:56.844 Atomic Write Unit (PFail): 1 00:11:56.844 Atomic Compare & Write Unit: 1 00:11:56.844 Fused Compare & Write: Not Supported 00:11:56.844 Scatter-Gather List 00:11:56.844 SGL Command Set: Supported 00:11:56.844 SGL Keyed: Not Supported 00:11:56.844 SGL Bit Bucket Descriptor: Not Supported 00:11:56.844 SGL Metadata Pointer: Not Supported 00:11:56.844 Oversized SGL: Not Supported 00:11:56.844 SGL Metadata Address: Not Supported 00:11:56.844 SGL Offset: Not Supported 00:11:56.844 Transport SGL Data Block: Not Supported 00:11:56.844 Replay Protected Memory Block: Not Supported 00:11:56.844 00:11:56.844 Firmware Slot Information 00:11:56.844 ========================= 00:11:56.844 Active slot: 1 00:11:56.844 Slot 1 Firmware Revision: 1.0 00:11:56.844 00:11:56.844 00:11:56.844 Commands Supported and Effects 00:11:56.844 ============================== 00:11:56.844 Admin Commands 00:11:56.844 -------------- 00:11:56.844 Delete I/O Submission Queue (00h): Supported 00:11:56.844 Create I/O Submission Queue (01h): Supported 00:11:56.844 Get Log Page (02h): Supported 00:11:56.844 Delete I/O Completion Queue (04h): Supported 00:11:56.844 Create I/O Completion Queue (05h): Supported 00:11:56.844 Identify (06h): Supported 00:11:56.844 Abort (08h): Supported 00:11:56.844 Set Features (09h): Supported 00:11:56.844 Get Features (0Ah): Supported 00:11:56.844 Asynchronous Event Request (0Ch): Supported 00:11:56.844 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:56.844 Directive Send (19h): Supported 00:11:56.844 Directive Receive (1Ah): Supported 00:11:56.844 Virtualization Management (1Ch): Supported 00:11:56.844 Doorbell Buffer Config (7Ch): Supported 00:11:56.844 Format NVM (80h): Supported LBA-Change 00:11:56.844 I/O Commands 00:11:56.844 ------------ 00:11:56.844 Flush (00h): Supported LBA-Change 00:11:56.844 Write (01h): Supported LBA-Change 00:11:56.844 Read (02h): Supported 00:11:56.844 Compare (05h): Supported 00:11:56.844 Write Zeroes (08h): Supported LBA-Change 00:11:56.844 Dataset Management 
(09h): Supported LBA-Change 00:11:56.844 Unknown (0Ch): Supported 00:11:56.844 Unknown (12h): Supported 00:11:56.844 Copy (19h): Supported LBA-Change 00:11:56.844 Unknown (1Dh): Supported LBA-Change 00:11:56.844 00:11:56.844 Error Log 00:11:56.844 ========= 00:11:56.844 00:11:56.844 Arbitration 00:11:56.844 =========== 00:11:56.844 Arbitration Burst: no limit 00:11:56.844 00:11:56.844 Power Management 00:11:56.844 ================ 00:11:56.844 Number of Power States: 1 00:11:56.844 Current Power State: Power State #0 00:11:56.844 Power State #0: 00:11:56.844 Max Power: 25.00 W 00:11:56.844 Non-Operational State: Operational 00:11:56.844 Entry Latency: 16 microseconds 00:11:56.844 Exit Latency: 4 microseconds 00:11:56.844 Relative Read Throughput: 0 00:11:56.844 Relative Read Latency: 0 00:11:56.844 Relative Write Throughput: 0 00:11:56.844 Relative Write Latency: 0 00:11:56.844 Idle Power: Not Reported 00:11:56.844 Active Power: Not Reported 00:11:56.844 Non-Operational Permissive Mode: Not Supported 00:11:56.844 00:11:56.844 Health Information 00:11:56.844 ================== 00:11:56.844 Critical Warnings: 00:11:56.844 Available Spare Space: OK 00:11:56.844 [2024-11-20 09:08:51.700757] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64371 terminated unexpected 00:11:56.844 Temperature: OK 00:11:56.844 Device Reliability: OK 00:11:56.845 Read Only: No 00:11:56.845 Volatile Memory Backup: OK 00:11:56.845 Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.845 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:56.845 Available Spare: 0% 00:11:56.845 Available Spare Threshold: 0% 00:11:56.845 Life Percentage Used: 0% 00:11:56.845 Data Units Read: 1092 00:11:56.845 Data Units Written: 959 00:11:56.845 Host Read Commands: 47835 00:11:56.845 Host Write Commands: 46623 00:11:56.845 Controller Busy Time: 0 minutes 00:11:56.845 Power Cycles: 0 00:11:56.845 Power On Hours: 0 hours 00:11:56.845 Unsafe Shutdowns: 0 00:11:56.845 Unrecoverable Media Errors: 0 00:11:56.845 Lifetime Error Log Entries: 0 00:11:56.845 Warning Temperature Time: 0 minutes 00:11:56.845 Critical Temperature Time: 0 minutes 00:11:56.845 00:11:56.845 Number of Queues 00:11:56.845 ================ 00:11:56.845 Number of I/O Submission Queues: 64 00:11:56.845 Number of I/O Completion Queues: 64 00:11:56.845 00:11:56.845 ZNS Specific Controller Data 00:11:56.845 ============================ 00:11:56.845 Zone Append Size Limit: 0 00:11:56.845 00:11:56.845 00:11:56.845 Active Namespaces 00:11:56.845 ================= 00:11:56.845 Namespace ID:1 00:11:56.845 Error Recovery Timeout: Unlimited 00:11:56.845 Command Set Identifier: NVM (00h) 00:11:56.845 Deallocate: Supported 00:11:56.845 Deallocated/Unwritten Error: Supported 00:11:56.845 Deallocated Read Value: All 0x00 00:11:56.845 Deallocate in Write Zeroes: Not Supported 00:11:56.845 Deallocated Guard Field: 0xFFFF 00:11:56.845 Flush: Supported 00:11:56.845 Reservation: Not Supported 00:11:56.845 Namespace Sharing Capabilities: Private 00:11:56.845 Size (in LBAs): 1310720 (5GiB) 00:11:56.845 Capacity (in LBAs): 1310720 (5GiB) 00:11:56.845 Utilization (in LBAs): 1310720 (5GiB) 00:11:56.845 Thin Provisioning: Not Supported 00:11:56.845 Per-NS Atomic Units: No 00:11:56.845 Maximum Single Source Range Length: 128 00:11:56.845 Maximum Copy Length: 128 00:11:56.845 Maximum Source Range Count: 128 00:11:56.845 NGUID/EUI64 Never Reused: No 00:11:56.845 Namespace Write Protected: No 00:11:56.845 Number of LBA Formats: 8 00:11:56.845 Current LBA Format:
LBA Format #04 00:11:56.845 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:56.845 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:56.845 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:56.845 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:56.845 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:56.845 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:56.845 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:56.845 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:56.845 00:11:56.845 NVM Specific Namespace Data 00:11:56.845 =========================== 00:11:56.845 Logical Block Storage Tag Mask: 0 00:11:56.845 Protection Information Capabilities: 00:11:56.845 16b Guard Protection Information Storage Tag Support: No 00:11:56.845 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:56.845 Storage Tag Check Read Support: No 00:11:56.845 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.845 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.845 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.845 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.845 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.845 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.845 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.845 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.845 ===================================================== 00:11:56.845 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:56.845 ===================================================== 00:11:56.845 Controller Capabilities/Features 00:11:56.845 ================================ 00:11:56.845 Vendor ID: 1b36 00:11:56.845 Subsystem Vendor ID: 1af4 00:11:56.845 Serial Number: 12343 00:11:56.845 Model Number: QEMU NVMe Ctrl 00:11:56.845 Firmware Version: 8.0.0 00:11:56.845 Recommended Arb Burst: 6 00:11:56.845 IEEE OUI Identifier: 00 54 52 00:11:56.845 Multi-path I/O 00:11:56.845 May have multiple subsystem ports: No 00:11:56.845 May have multiple controllers: Yes 00:11:56.845 Associated with SR-IOV VF: No 00:11:56.845 Max Data Transfer Size: 524288 00:11:56.845 Max Number of Namespaces: 256 00:11:56.845 Max Number of I/O Queues: 64 00:11:56.845 NVMe Specification Version (VS): 1.4 00:11:56.845 NVMe Specification Version (Identify): 1.4 00:11:56.845 Maximum Queue Entries: 2048 00:11:56.845 Contiguous Queues Required: Yes 00:11:56.845 Arbitration Mechanisms Supported 00:11:56.845 Weighted Round Robin: Not Supported 00:11:56.845 Vendor Specific: Not Supported 00:11:56.845 Reset Timeout: 7500 ms 00:11:56.845 Doorbell Stride: 4 bytes 00:11:56.845 NVM Subsystem Reset: Not Supported 00:11:56.845 Command Sets Supported 00:11:56.845 NVM Command Set: Supported 00:11:56.845 Boot Partition: Not Supported 00:11:56.845 Memory Page Size Minimum: 4096 bytes 00:11:56.845 Memory Page Size Maximum: 65536 bytes 00:11:56.845 Persistent Memory Region: Not Supported 00:11:56.845 Optional Asynchronous Events Supported 00:11:56.845 Namespace Attribute Notices: Supported 00:11:56.845 Firmware Activation Notices: Not Supported 00:11:56.845 ANA Change Notices: Not Supported 00:11:56.845 PLE Aggregate Log 
Change Notices: Not Supported 00:11:56.845 LBA Status Info Alert Notices: Not Supported 00:11:56.845 EGE Aggregate Log Change Notices: Not Supported 00:11:56.845 Normal NVM Subsystem Shutdown event: Not Supported 00:11:56.845 Zone Descriptor Change Notices: Not Supported 00:11:56.845 Discovery Log Change Notices: Not Supported 00:11:56.845 Controller Attributes 00:11:56.845 128-bit Host Identifier: Not Supported 00:11:56.845 Non-Operational Permissive Mode: Not Supported 00:11:56.845 NVM Sets: Not Supported 00:11:56.845 Read Recovery Levels: Not Supported 00:11:56.845 Endurance Groups: Supported 00:11:56.845 Predictable Latency Mode: Not Supported 00:11:56.845 Traffic Based Keep ALive: Not Supported 00:11:56.845 Namespace Granularity: Not Supported 00:11:56.845 SQ Associations: Not Supported 00:11:56.845 UUID List: Not Supported 00:11:56.845 Multi-Domain Subsystem: Not Supported 00:11:56.845 Fixed Capacity Management: Not Supported 00:11:56.845 Variable Capacity Management: Not Supported 00:11:56.845 Delete Endurance Group: Not Supported 00:11:56.845 Delete NVM Set: Not Supported 00:11:56.845 Extended LBA Formats Supported: Supported 00:11:56.845 Flexible Data Placement Supported: Supported 00:11:56.845 00:11:56.845 Controller Memory Buffer Support 00:11:56.845 ================================ 00:11:56.845 Supported: No 00:11:56.845 00:11:56.845 Persistent Memory Region Support 00:11:56.845 ================================ 00:11:56.845 Supported: No 00:11:56.845 00:11:56.845 Admin Command Set Attributes 00:11:56.845 ============================ 00:11:56.845 Security Send/Receive: Not Supported 00:11:56.845 Format NVM: Supported 00:11:56.845 Firmware Activate/Download: Not Supported 00:11:56.846 Namespace Management: Supported 00:11:56.846 Device Self-Test: Not Supported 00:11:56.846 Directives: Supported 00:11:56.846 NVMe-MI: Not Supported 00:11:56.846 Virtualization Management: Not Supported 00:11:56.846 Doorbell Buffer Config: Supported 00:11:56.846 Get LBA Status Capability: Not Supported 00:11:56.846 Command & Feature Lockdown Capability: Not Supported 00:11:56.846 Abort Command Limit: 4 00:11:56.846 Async Event Request Limit: 4 00:11:56.846 Number of Firmware Slots: N/A 00:11:56.846 Firmware Slot 1 Read-Only: N/A 00:11:56.846 Firmware Activation Without Reset: N/A 00:11:56.846 Multiple Update Detection Support: N/A 00:11:56.846 Firmware Update Granularity: No Information Provided 00:11:56.846 Per-Namespace SMART Log: Yes 00:11:56.846 Asymmetric Namespace Access Log Page: Not Supported 00:11:56.846 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:56.846 Command Effects Log Page: Supported 00:11:56.846 Get Log Page Extended Data: Supported 00:11:56.846 Telemetry Log Pages: Not Supported 00:11:56.846 Persistent Event Log Pages: Not Supported 00:11:56.846 Supported Log Pages Log Page: May Support 00:11:56.846 Commands Supported & Effects Log Page: Not Supported 00:11:56.846 Feature Identifiers & Effects Log Page:May Support 00:11:56.846 NVMe-MI Commands & Effects Log Page: May Support 00:11:56.846 Data Area 4 for Telemetry Log: Not Supported 00:11:56.846 Error Log Page Entries Supported: 1 00:11:56.846 Keep Alive: Not Supported 00:11:56.846 00:11:56.846 NVM Command Set Attributes 00:11:56.846 ========================== 00:11:56.846 Submission Queue Entry Size 00:11:56.846 Max: 64 00:11:56.846 Min: 64 00:11:56.846 Completion Queue Entry Size 00:11:56.846 Max: 16 00:11:56.846 Min: 16 00:11:56.846 Number of Namespaces: 256 00:11:56.846 Compare Command: Supported 00:11:56.846 Write 
Uncorrectable Command: Not Supported 00:11:56.846 Dataset Management Command: Supported 00:11:56.846 Write Zeroes Command: Supported 00:11:56.846 Set Features Save Field: Supported 00:11:56.846 Reservations: Not Supported 00:11:56.846 Timestamp: Supported 00:11:56.846 Copy: Supported 00:11:56.846 Volatile Write Cache: Present 00:11:56.846 Atomic Write Unit (Normal): 1 00:11:56.846 Atomic Write Unit (PFail): 1 00:11:56.846 Atomic Compare & Write Unit: 1 00:11:56.846 Fused Compare & Write: Not Supported 00:11:56.846 Scatter-Gather List 00:11:56.846 SGL Command Set: Supported 00:11:56.846 SGL Keyed: Not Supported 00:11:56.846 SGL Bit Bucket Descriptor: Not Supported 00:11:56.846 SGL Metadata Pointer: Not Supported 00:11:56.846 Oversized SGL: Not Supported 00:11:56.846 SGL Metadata Address: Not Supported 00:11:56.846 SGL Offset: Not Supported 00:11:56.846 Transport SGL Data Block: Not Supported 00:11:56.846 Replay Protected Memory Block: Not Supported 00:11:56.846 00:11:56.846 Firmware Slot Information 00:11:56.846 ========================= 00:11:56.846 Active slot: 1 00:11:56.846 Slot 1 Firmware Revision: 1.0 00:11:56.846 00:11:56.846 00:11:56.846 Commands Supported and Effects 00:11:56.846 ============================== 00:11:56.846 Admin Commands 00:11:56.846 -------------- 00:11:56.846 Delete I/O Submission Queue (00h): Supported 00:11:56.846 Create I/O Submission Queue (01h): Supported 00:11:56.846 Get Log Page (02h): Supported 00:11:56.846 Delete I/O Completion Queue (04h): Supported 00:11:56.846 Create I/O Completion Queue (05h): Supported 00:11:56.846 Identify (06h): Supported 00:11:56.846 Abort (08h): Supported 00:11:56.846 Set Features (09h): Supported 00:11:56.846 Get Features (0Ah): Supported 00:11:56.846 Asynchronous Event Request (0Ch): Supported 00:11:56.846 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:56.846 Directive Send (19h): Supported 00:11:56.846 Directive Receive (1Ah): Supported 00:11:56.846 Virtualization Management (1Ch): Supported 00:11:56.846 Doorbell Buffer Config (7Ch): Supported 00:11:56.846 Format NVM (80h): Supported LBA-Change 00:11:56.846 I/O Commands 00:11:56.846 ------------ 00:11:56.846 Flush (00h): Supported LBA-Change 00:11:56.846 Write (01h): Supported LBA-Change 00:11:56.846 Read (02h): Supported 00:11:56.846 Compare (05h): Supported 00:11:56.846 Write Zeroes (08h): Supported LBA-Change 00:11:56.846 Dataset Management (09h): Supported LBA-Change 00:11:56.846 Unknown (0Ch): Supported 00:11:56.846 Unknown (12h): Supported 00:11:56.846 Copy (19h): Supported LBA-Change 00:11:56.846 Unknown (1Dh): Supported LBA-Change 00:11:56.846 00:11:56.846 Error Log 00:11:56.846 ========= 00:11:56.846 00:11:56.846 Arbitration 00:11:56.846 =========== 00:11:56.846 Arbitration Burst: no limit 00:11:56.846 00:11:56.846 Power Management 00:11:56.846 ================ 00:11:56.846 Number of Power States: 1 00:11:56.846 Current Power State: Power State #0 00:11:56.846 Power State #0: 00:11:56.846 Max Power: 25.00 W 00:11:56.846 Non-Operational State: Operational 00:11:56.846 Entry Latency: 16 microseconds 00:11:56.846 Exit Latency: 4 microseconds 00:11:56.846 Relative Read Throughput: 0 00:11:56.846 Relative Read Latency: 0 00:11:56.846 Relative Write Throughput: 0 00:11:56.846 Relative Write Latency: 0 00:11:56.846 Idle Power: Not Reported 00:11:56.846 Active Power: Not Reported 00:11:56.846 Non-Operational Permissive Mode: Not Supported 00:11:56.846 00:11:56.846 Health Information 00:11:56.846 ================== 00:11:56.846 Critical Warnings: 00:11:56.846 
Available Spare Space: OK 00:11:56.846 Temperature: OK 00:11:56.846 Device Reliability: OK 00:11:56.846 Read Only: No 00:11:56.846 Volatile Memory Backup: OK 00:11:56.846 Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.846 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:56.846 Available Spare: 0% 00:11:56.846 Available Spare Threshold: 0% 00:11:56.846 Life Percentage Used: 0% 00:11:56.846 Data Units Read: 797 00:11:56.846 Data Units Written: 726 00:11:56.846 Host Read Commands: 33003 00:11:56.846 Host Write Commands: 32426 00:11:56.846 Controller Busy Time: 0 minutes 00:11:56.846 Power Cycles: 0 00:11:56.846 Power On Hours: 0 hours 00:11:56.846 Unsafe Shutdowns: 0 00:11:56.846 Unrecoverable Media Errors: 0 00:11:56.846 Lifetime Error Log Entries: 0 00:11:56.846 Warning Temperature Time: 0 minutes 00:11:56.846 Critical Temperature Time: 0 minutes 00:11:56.846 00:11:56.846 Number of Queues 00:11:56.846 ================ 00:11:56.846 Number of I/O Submission Queues: 64 00:11:56.846 Number of I/O Completion Queues: 64 00:11:56.846 00:11:56.846 ZNS Specific Controller Data 00:11:56.846 ============================ 00:11:56.846 Zone Append Size Limit: 0 00:11:56.846 00:11:56.846 00:11:56.846 Active Namespaces 00:11:56.846 ================= 00:11:56.846 Namespace ID:1 00:11:56.846 Error Recovery Timeout: Unlimited 00:11:56.846 Command Set Identifier: NVM (00h) 00:11:56.846 Deallocate: Supported 00:11:56.846 Deallocated/Unwritten Error: Supported 00:11:56.846 Deallocated Read Value: All 0x00 00:11:56.846 Deallocate in Write Zeroes: Not Supported 00:11:56.846 Deallocated Guard Field: 0xFFFF 00:11:56.846 Flush: Supported 00:11:56.846 Reservation: Not Supported 00:11:56.846 Namespace Sharing Capabilities: Multiple Controllers 00:11:56.846 Size (in LBAs): 262144 (1GiB) 00:11:56.846 Capacity (in LBAs): 262144 (1GiB) 00:11:56.846 Utilization (in LBAs): 262144 (1GiB) 00:11:56.846 Thin Provisioning: Not Supported 00:11:56.846 Per-NS Atomic Units: No 00:11:56.846 Maximum Single Source Range Length: 128 00:11:56.846 Maximum Copy Length: 128 00:11:56.846 Maximum Source Range Count: 128 00:11:56.846 NGUID/EUI64 Never Reused: No 00:11:56.846 Namespace Write Protected: No 00:11:56.846 Endurance group ID: 1 00:11:56.846 Number of LBA Formats: 8 00:11:56.846 Current LBA Format: LBA Format #04 00:11:56.847 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:56.847 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:56.847 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:56.847 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:56.847 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:56.847 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:56.847 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:56.847 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:56.847 00:11:56.847 Get Feature FDP: 00:11:56.847 ================ 00:11:56.847 Enabled: Yes 00:11:56.847 FDP configuration index: 0 00:11:56.847 00:11:56.847 FDP configurations log page 00:11:56.847 =========================== 00:11:56.847 Number of FDP configurations: 1 00:11:56.847 Version: 0 00:11:56.847 Size: 112 00:11:56.847 FDP Configuration Descriptor: 0 00:11:56.847 Descriptor Size: 96 00:11:56.847 Reclaim Group Identifier format: 2 00:11:56.847 FDP Volatile Write Cache: Not Present 00:11:56.847 FDP Configuration: Valid 00:11:56.847 Vendor Specific Size: 0 00:11:56.847 Number of Reclaim Groups: 2 00:11:56.847 Number of Recalim Unit Handles: 8 00:11:56.847 Max Placement Identifiers: 128 00:11:56.847 Number of 
Namespaces Suppprted: 256 00:11:56.847 Reclaim unit Nominal Size: 6000000 bytes 00:11:56.847 Estimated Reclaim Unit Time Limit: Not Reported 00:11:56.847 RUH Desc #000: RUH Type: Initially Isolated 00:11:56.847 RUH Desc #001: RUH Type: Initially Isolated 00:11:56.847 RUH Desc #002: RUH Type: Initially Isolated 00:11:56.847 RUH Desc #003: RUH Type: Initially Isolated 00:11:56.847 RUH Desc #004: RUH Type: Initially Isolated 00:11:56.847 RUH Desc #005: RUH Type: Initially Isolated 00:11:56.847 RUH Desc #006: RUH Type: Initially Isolated 00:11:56.847 RUH Desc #007: RUH Type: Initially Isolated 00:11:56.847 00:11:56.847 FDP reclaim unit handle usage log page 00:11:56.847 ====================================== 00:11:56.847 Number of Reclaim Unit Handles: 8 00:11:56.847 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:56.847 RUH Usage Desc #001: RUH Attributes: Unused 00:11:56.847 RUH Usage Desc #002: RUH Attributes: Unused 00:11:56.847 RUH Usage Desc #003: RUH Attributes: Unused 00:11:56.847 RUH Usage Desc #004: RUH Attributes: Unused 00:11:56.847 RUH Usage Desc #005: RUH Attributes: Unused 00:11:56.847 RUH Usage Desc #006: RUH Attributes: Unused 00:11:56.847 RUH Usage Desc #007: RUH Attributes: Unused 00:11:56.847 00:11:56.847 FDP statistics log page 00:11:56.847 ======================= 00:11:56.847 Host bytes with metadata written: 459513856 00:11:56.847 [2024-11-20 09:08:51.702827] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64371 terminated unexpected 00:11:56.847 Media bytes with metadata written: 459579392 00:11:56.847 Media bytes erased: 0 00:11:56.847 00:11:56.847 FDP events log page 00:11:56.847 =================== 00:11:56.847 Number of FDP events: 0 00:11:56.847 00:11:56.847 NVM Specific Namespace Data 00:11:56.847 =========================== 00:11:56.847 Logical Block Storage Tag Mask: 0 00:11:56.847 Protection Information Capabilities: 00:11:56.847 16b Guard Protection Information Storage Tag Support: No 00:11:56.847 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:56.847 Storage Tag Check Read Support: No 00:11:56.847 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.847 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.847 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.847 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.847 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.847 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.847 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.847 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.847 ===================================================== 00:11:56.847 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:56.847 ===================================================== 00:11:56.847 Controller Capabilities/Features 00:11:56.847 ================================ 00:11:56.847 Vendor ID: 1b36 00:11:56.847 Subsystem Vendor ID: 1af4 00:11:56.847 Serial Number: 12342 00:11:56.847 Model Number: QEMU NVMe Ctrl 00:11:56.847 Firmware Version: 8.0.0 00:11:56.847 Recommended Arb Burst: 6 00:11:56.847 IEEE OUI Identifier: 00 54 52 00:11:56.847 Multi-path I/O
00:11:56.847 May have multiple subsystem ports: No 00:11:56.847 May have multiple controllers: No 00:11:56.847 Associated with SR-IOV VF: No 00:11:56.847 Max Data Transfer Size: 524288 00:11:56.847 Max Number of Namespaces: 256 00:11:56.847 Max Number of I/O Queues: 64 00:11:56.847 NVMe Specification Version (VS): 1.4 00:11:56.847 NVMe Specification Version (Identify): 1.4 00:11:56.847 Maximum Queue Entries: 2048 00:11:56.847 Contiguous Queues Required: Yes 00:11:56.847 Arbitration Mechanisms Supported 00:11:56.847 Weighted Round Robin: Not Supported 00:11:56.847 Vendor Specific: Not Supported 00:11:56.847 Reset Timeout: 7500 ms 00:11:56.847 Doorbell Stride: 4 bytes 00:11:56.847 NVM Subsystem Reset: Not Supported 00:11:56.847 Command Sets Supported 00:11:56.847 NVM Command Set: Supported 00:11:56.847 Boot Partition: Not Supported 00:11:56.847 Memory Page Size Minimum: 4096 bytes 00:11:56.847 Memory Page Size Maximum: 65536 bytes 00:11:56.847 Persistent Memory Region: Not Supported 00:11:56.847 Optional Asynchronous Events Supported 00:11:56.847 Namespace Attribute Notices: Supported 00:11:56.847 Firmware Activation Notices: Not Supported 00:11:56.847 ANA Change Notices: Not Supported 00:11:56.847 PLE Aggregate Log Change Notices: Not Supported 00:11:56.847 LBA Status Info Alert Notices: Not Supported 00:11:56.847 EGE Aggregate Log Change Notices: Not Supported 00:11:56.847 Normal NVM Subsystem Shutdown event: Not Supported 00:11:56.847 Zone Descriptor Change Notices: Not Supported 00:11:56.847 Discovery Log Change Notices: Not Supported 00:11:56.847 Controller Attributes 00:11:56.847 128-bit Host Identifier: Not Supported 00:11:56.847 Non-Operational Permissive Mode: Not Supported 00:11:56.847 NVM Sets: Not Supported 00:11:56.847 Read Recovery Levels: Not Supported 00:11:56.847 Endurance Groups: Not Supported 00:11:56.847 Predictable Latency Mode: Not Supported 00:11:56.847 Traffic Based Keep ALive: Not Supported 00:11:56.847 Namespace Granularity: Not Supported 00:11:56.847 SQ Associations: Not Supported 00:11:56.847 UUID List: Not Supported 00:11:56.847 Multi-Domain Subsystem: Not Supported 00:11:56.847 Fixed Capacity Management: Not Supported 00:11:56.847 Variable Capacity Management: Not Supported 00:11:56.847 Delete Endurance Group: Not Supported 00:11:56.847 Delete NVM Set: Not Supported 00:11:56.847 Extended LBA Formats Supported: Supported 00:11:56.848 Flexible Data Placement Supported: Not Supported 00:11:56.848 00:11:56.848 Controller Memory Buffer Support 00:11:56.848 ================================ 00:11:56.848 Supported: No 00:11:56.848 00:11:56.848 Persistent Memory Region Support 00:11:56.848 ================================ 00:11:56.848 Supported: No 00:11:56.848 00:11:56.848 Admin Command Set Attributes 00:11:56.848 ============================ 00:11:56.848 Security Send/Receive: Not Supported 00:11:56.848 Format NVM: Supported 00:11:56.848 Firmware Activate/Download: Not Supported 00:11:56.848 Namespace Management: Supported 00:11:56.848 Device Self-Test: Not Supported 00:11:56.848 Directives: Supported 00:11:56.848 NVMe-MI: Not Supported 00:11:56.848 Virtualization Management: Not Supported 00:11:56.848 Doorbell Buffer Config: Supported 00:11:56.848 Get LBA Status Capability: Not Supported 00:11:56.848 Command & Feature Lockdown Capability: Not Supported 00:11:56.848 Abort Command Limit: 4 00:11:56.848 Async Event Request Limit: 4 00:11:56.848 Number of Firmware Slots: N/A 00:11:56.848 Firmware Slot 1 Read-Only: N/A 00:11:56.848 Firmware Activation Without Reset: N/A 
00:11:56.848 Multiple Update Detection Support: N/A 00:11:56.848 Firmware Update Granularity: No Information Provided 00:11:56.848 Per-Namespace SMART Log: Yes 00:11:56.848 Asymmetric Namespace Access Log Page: Not Supported 00:11:56.848 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:56.848 Command Effects Log Page: Supported 00:11:56.848 Get Log Page Extended Data: Supported 00:11:56.848 Telemetry Log Pages: Not Supported 00:11:56.848 Persistent Event Log Pages: Not Supported 00:11:56.848 Supported Log Pages Log Page: May Support 00:11:56.848 Commands Supported & Effects Log Page: Not Supported 00:11:56.848 Feature Identifiers & Effects Log Page:May Support 00:11:56.848 NVMe-MI Commands & Effects Log Page: May Support 00:11:56.848 Data Area 4 for Telemetry Log: Not Supported 00:11:56.848 Error Log Page Entries Supported: 1 00:11:56.848 Keep Alive: Not Supported 00:11:56.848 00:11:56.848 NVM Command Set Attributes 00:11:56.848 ========================== 00:11:56.848 Submission Queue Entry Size 00:11:56.848 Max: 64 00:11:56.848 Min: 64 00:11:56.848 Completion Queue Entry Size 00:11:56.848 Max: 16 00:11:56.848 Min: 16 00:11:56.848 Number of Namespaces: 256 00:11:56.848 Compare Command: Supported 00:11:56.848 Write Uncorrectable Command: Not Supported 00:11:56.848 Dataset Management Command: Supported 00:11:56.848 Write Zeroes Command: Supported 00:11:56.848 Set Features Save Field: Supported 00:11:56.848 Reservations: Not Supported 00:11:56.848 Timestamp: Supported 00:11:56.848 Copy: Supported 00:11:56.848 Volatile Write Cache: Present 00:11:56.848 Atomic Write Unit (Normal): 1 00:11:56.848 Atomic Write Unit (PFail): 1 00:11:56.848 Atomic Compare & Write Unit: 1 00:11:56.848 Fused Compare & Write: Not Supported 00:11:56.848 Scatter-Gather List 00:11:56.848 SGL Command Set: Supported 00:11:56.848 SGL Keyed: Not Supported 00:11:56.848 SGL Bit Bucket Descriptor: Not Supported 00:11:56.848 SGL Metadata Pointer: Not Supported 00:11:56.848 Oversized SGL: Not Supported 00:11:56.848 SGL Metadata Address: Not Supported 00:11:56.848 SGL Offset: Not Supported 00:11:56.848 Transport SGL Data Block: Not Supported 00:11:56.848 Replay Protected Memory Block: Not Supported 00:11:56.848 00:11:56.848 Firmware Slot Information 00:11:56.848 ========================= 00:11:56.848 Active slot: 1 00:11:56.848 Slot 1 Firmware Revision: 1.0 00:11:56.848 00:11:56.848 00:11:56.848 Commands Supported and Effects 00:11:56.848 ============================== 00:11:56.848 Admin Commands 00:11:56.848 -------------- 00:11:56.848 Delete I/O Submission Queue (00h): Supported 00:11:56.848 Create I/O Submission Queue (01h): Supported 00:11:56.848 Get Log Page (02h): Supported 00:11:56.848 Delete I/O Completion Queue (04h): Supported 00:11:56.848 Create I/O Completion Queue (05h): Supported 00:11:56.848 Identify (06h): Supported 00:11:56.848 Abort (08h): Supported 00:11:56.848 Set Features (09h): Supported 00:11:56.848 Get Features (0Ah): Supported 00:11:56.848 Asynchronous Event Request (0Ch): Supported 00:11:56.848 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:56.848 Directive Send (19h): Supported 00:11:56.848 Directive Receive (1Ah): Supported 00:11:56.848 Virtualization Management (1Ch): Supported 00:11:56.848 Doorbell Buffer Config (7Ch): Supported 00:11:56.848 Format NVM (80h): Supported LBA-Change 00:11:56.848 I/O Commands 00:11:56.848 ------------ 00:11:56.848 Flush (00h): Supported LBA-Change 00:11:56.848 Write (01h): Supported LBA-Change 00:11:56.848 Read (02h): Supported 00:11:56.848 Compare (05h): 
Supported 00:11:56.848 Write Zeroes (08h): Supported LBA-Change 00:11:56.848 Dataset Management (09h): Supported LBA-Change 00:11:56.848 Unknown (0Ch): Supported 00:11:56.848 Unknown (12h): Supported 00:11:56.848 Copy (19h): Supported LBA-Change 00:11:56.848 Unknown (1Dh): Supported LBA-Change 00:11:56.848 00:11:56.848 Error Log 00:11:56.848 ========= 00:11:56.848 00:11:56.848 Arbitration 00:11:56.848 =========== 00:11:56.848 Arbitration Burst: no limit 00:11:56.848 00:11:56.848 Power Management 00:11:56.848 ================ 00:11:56.848 Number of Power States: 1 00:11:56.848 Current Power State: Power State #0 00:11:56.848 Power State #0: 00:11:56.848 Max Power: 25.00 W 00:11:56.848 Non-Operational State: Operational 00:11:56.848 Entry Latency: 16 microseconds 00:11:56.848 Exit Latency: 4 microseconds 00:11:56.848 Relative Read Throughput: 0 00:11:56.848 Relative Read Latency: 0 00:11:56.848 Relative Write Throughput: 0 00:11:56.848 Relative Write Latency: 0 00:11:56.848 Idle Power: Not Reported 00:11:56.848 Active Power: Not Reported 00:11:56.848 Non-Operational Permissive Mode: Not Supported 00:11:56.848 00:11:56.848 Health Information 00:11:56.848 ================== 00:11:56.848 Critical Warnings: 00:11:56.848 Available Spare Space: OK 00:11:56.848 Temperature: OK 00:11:56.848 Device Reliability: OK 00:11:56.848 Read Only: No 00:11:56.848 Volatile Memory Backup: OK 00:11:56.848 Current Temperature: 323 Kelvin (50 Celsius) 00:11:56.848 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:56.848 Available Spare: 0% 00:11:56.848 Available Spare Threshold: 0% 00:11:56.848 Life Percentage Used: 0% 00:11:56.848 Data Units Read: 2240 00:11:56.848 Data Units Written: 2027 00:11:56.848 Host Read Commands: 97598 00:11:56.848 Host Write Commands: 95867 00:11:56.848 Controller Busy Time: 0 minutes 00:11:56.848 Power Cycles: 0 00:11:56.848 Power On Hours: 0 hours 00:11:56.848 Unsafe Shutdowns: 0 00:11:56.848 Unrecoverable Media Errors: 0 00:11:56.849 Lifetime Error Log Entries: 0 00:11:56.849 Warning Temperature Time: 0 minutes 00:11:56.849 Critical Temperature Time: 0 minutes 00:11:56.849 00:11:56.849 Number of Queues 00:11:56.849 ================ 00:11:56.849 Number of I/O Submission Queues: 64 00:11:56.849 Number of I/O Completion Queues: 64 00:11:56.849 00:11:56.849 ZNS Specific Controller Data 00:11:56.849 ============================ 00:11:56.849 Zone Append Size Limit: 0 00:11:56.849 00:11:56.849 00:11:56.849 Active Namespaces 00:11:56.849 ================= 00:11:56.849 Namespace ID:1 00:11:56.849 Error Recovery Timeout: Unlimited 00:11:56.849 Command Set Identifier: NVM (00h) 00:11:56.849 Deallocate: Supported 00:11:56.849 Deallocated/Unwritten Error: Supported 00:11:56.849 Deallocated Read Value: All 0x00 00:11:56.849 Deallocate in Write Zeroes: Not Supported 00:11:56.849 Deallocated Guard Field: 0xFFFF 00:11:56.849 Flush: Supported 00:11:56.849 Reservation: Not Supported 00:11:56.849 Namespace Sharing Capabilities: Private 00:11:56.849 Size (in LBAs): 1048576 (4GiB) 00:11:56.849 Capacity (in LBAs): 1048576 (4GiB) 00:11:56.849 Utilization (in LBAs): 1048576 (4GiB) 00:11:56.849 Thin Provisioning: Not Supported 00:11:56.849 Per-NS Atomic Units: No 00:11:56.849 Maximum Single Source Range Length: 128 00:11:56.849 Maximum Copy Length: 128 00:11:56.849 Maximum Source Range Count: 128 00:11:56.849 NGUID/EUI64 Never Reused: No 00:11:56.849 Namespace Write Protected: No 00:11:56.849 Number of LBA Formats: 8 00:11:56.849 Current LBA Format: LBA Format #04 00:11:56.849 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:11:56.849 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:56.849 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:56.849 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:56.849 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:56.849 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:56.849 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:56.849 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:56.849 00:11:56.849 NVM Specific Namespace Data 00:11:56.849 =========================== 00:11:56.849 Logical Block Storage Tag Mask: 0 00:11:56.849 Protection Information Capabilities: 00:11:56.849 16b Guard Protection Information Storage Tag Support: No 00:11:56.849 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:56.849 Storage Tag Check Read Support: No 00:11:56.849 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Namespace ID:2 00:11:56.849 Error Recovery Timeout: Unlimited 00:11:56.849 Command Set Identifier: NVM (00h) 00:11:56.849 Deallocate: Supported 00:11:56.849 Deallocated/Unwritten Error: Supported 00:11:56.849 Deallocated Read Value: All 0x00 00:11:56.849 Deallocate in Write Zeroes: Not Supported 00:11:56.849 Deallocated Guard Field: 0xFFFF 00:11:56.849 Flush: Supported 00:11:56.849 Reservation: Not Supported 00:11:56.849 Namespace Sharing Capabilities: Private 00:11:56.849 Size (in LBAs): 1048576 (4GiB) 00:11:56.849 Capacity (in LBAs): 1048576 (4GiB) 00:11:56.849 Utilization (in LBAs): 1048576 (4GiB) 00:11:56.849 Thin Provisioning: Not Supported 00:11:56.849 Per-NS Atomic Units: No 00:11:56.849 Maximum Single Source Range Length: 128 00:11:56.849 Maximum Copy Length: 128 00:11:56.849 Maximum Source Range Count: 128 00:11:56.849 NGUID/EUI64 Never Reused: No 00:11:56.849 Namespace Write Protected: No 00:11:56.849 Number of LBA Formats: 8 00:11:56.849 Current LBA Format: LBA Format #04 00:11:56.849 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:56.849 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:56.849 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:56.849 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:56.849 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:56.849 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:56.849 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:56.849 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:56.849 00:11:56.849 NVM Specific Namespace Data 00:11:56.849 =========================== 00:11:56.849 Logical Block Storage Tag Mask: 0 00:11:56.849 Protection Information Capabilities: 00:11:56.849 16b Guard Protection Information Storage Tag Support: No 00:11:56.849 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:11:56.849 Storage Tag Check Read Support: No 00:11:56.849 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Namespace ID:3 00:11:56.849 Error Recovery Timeout: Unlimited 00:11:56.849 Command Set Identifier: NVM (00h) 00:11:56.849 Deallocate: Supported 00:11:56.849 Deallocated/Unwritten Error: Supported 00:11:56.849 Deallocated Read Value: All 0x00 00:11:56.849 Deallocate in Write Zeroes: Not Supported 00:11:56.849 Deallocated Guard Field: 0xFFFF 00:11:56.849 Flush: Supported 00:11:56.849 Reservation: Not Supported 00:11:56.849 Namespace Sharing Capabilities: Private 00:11:56.849 Size (in LBAs): 1048576 (4GiB) 00:11:56.849 Capacity (in LBAs): 1048576 (4GiB) 00:11:56.849 Utilization (in LBAs): 1048576 (4GiB) 00:11:56.849 Thin Provisioning: Not Supported 00:11:56.849 Per-NS Atomic Units: No 00:11:56.849 Maximum Single Source Range Length: 128 00:11:56.849 Maximum Copy Length: 128 00:11:56.849 Maximum Source Range Count: 128 00:11:56.849 NGUID/EUI64 Never Reused: No 00:11:56.849 Namespace Write Protected: No 00:11:56.849 Number of LBA Formats: 8 00:11:56.849 Current LBA Format: LBA Format #04 00:11:56.849 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:56.849 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:56.849 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:56.849 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:56.849 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:56.849 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:56.849 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:56.849 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:56.849 00:11:56.849 NVM Specific Namespace Data 00:11:56.849 =========================== 00:11:56.849 Logical Block Storage Tag Mask: 0 00:11:56.849 Protection Information Capabilities: 00:11:56.849 16b Guard Protection Information Storage Tag Support: No 00:11:56.849 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:56.849 Storage Tag Check Read Support: No 00:11:56.849 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:56.849 09:08:51 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:56.849 09:08:51 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:57.109 ===================================================== 00:11:57.109 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:57.109 ===================================================== 00:11:57.109 Controller Capabilities/Features 00:11:57.109 ================================ 00:11:57.109 Vendor ID: 1b36 00:11:57.109 Subsystem Vendor ID: 1af4 00:11:57.109 Serial Number: 12340 00:11:57.109 Model Number: QEMU NVMe Ctrl 00:11:57.109 Firmware Version: 8.0.0 00:11:57.109 Recommended Arb Burst: 6 00:11:57.109 IEEE OUI Identifier: 00 54 52 00:11:57.109 Multi-path I/O 00:11:57.109 May have multiple subsystem ports: No 00:11:57.109 May have multiple controllers: No 00:11:57.109 Associated with SR-IOV VF: No 00:11:57.109 Max Data Transfer Size: 524288 00:11:57.109 Max Number of Namespaces: 256 00:11:57.109 Max Number of I/O Queues: 64 00:11:57.109 NVMe Specification Version (VS): 1.4 00:11:57.109 NVMe Specification Version (Identify): 1.4 00:11:57.109 Maximum Queue Entries: 2048 00:11:57.109 Contiguous Queues Required: Yes 00:11:57.109 Arbitration Mechanisms Supported 00:11:57.109 Weighted Round Robin: Not Supported 00:11:57.109 Vendor Specific: Not Supported 00:11:57.109 Reset Timeout: 7500 ms 00:11:57.109 Doorbell Stride: 4 bytes 00:11:57.109 NVM Subsystem Reset: Not Supported 00:11:57.109 Command Sets Supported 00:11:57.109 NVM Command Set: Supported 00:11:57.109 Boot Partition: Not Supported 00:11:57.109 Memory Page Size Minimum: 4096 bytes 00:11:57.109 Memory Page Size Maximum: 65536 bytes 00:11:57.109 Persistent Memory Region: Not Supported 00:11:57.109 Optional Asynchronous Events Supported 00:11:57.109 Namespace Attribute Notices: Supported 00:11:57.109 Firmware Activation Notices: Not Supported 00:11:57.109 ANA Change Notices: Not Supported 00:11:57.109 PLE Aggregate Log Change Notices: Not Supported 00:11:57.109 LBA Status Info Alert Notices: Not Supported 00:11:57.109 EGE Aggregate Log Change Notices: Not Supported 00:11:57.109 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.109 Zone Descriptor Change Notices: Not Supported 00:11:57.109 Discovery Log Change Notices: Not Supported 00:11:57.109 Controller Attributes 00:11:57.109 128-bit Host Identifier: Not Supported 00:11:57.109 Non-Operational Permissive Mode: Not Supported 00:11:57.109 NVM Sets: Not Supported 00:11:57.109 Read Recovery Levels: Not Supported 00:11:57.109 Endurance Groups: Not Supported 00:11:57.109 Predictable Latency Mode: Not Supported 00:11:57.109 Traffic Based Keep ALive: Not Supported 00:11:57.109 Namespace Granularity: Not Supported 00:11:57.109 SQ Associations: Not Supported 00:11:57.109 UUID List: Not Supported 00:11:57.109 Multi-Domain Subsystem: Not Supported 00:11:57.109 Fixed Capacity Management: Not Supported 00:11:57.109 Variable Capacity Management: Not Supported 00:11:57.109 Delete Endurance Group: Not Supported 00:11:57.109 Delete NVM Set: Not Supported 00:11:57.109 Extended LBA Formats Supported: Supported 00:11:57.109 Flexible Data Placement Supported: Not Supported 00:11:57.109 00:11:57.109 Controller Memory Buffer Support 00:11:57.109 ================================ 00:11:57.109 Supported: No 00:11:57.109 00:11:57.109 Persistent Memory Region Support 00:11:57.109 
================================ 00:11:57.109 Supported: No 00:11:57.109 00:11:57.109 Admin Command Set Attributes 00:11:57.109 ============================ 00:11:57.109 Security Send/Receive: Not Supported 00:11:57.109 Format NVM: Supported 00:11:57.109 Firmware Activate/Download: Not Supported 00:11:57.109 Namespace Management: Supported 00:11:57.109 Device Self-Test: Not Supported 00:11:57.109 Directives: Supported 00:11:57.109 NVMe-MI: Not Supported 00:11:57.109 Virtualization Management: Not Supported 00:11:57.109 Doorbell Buffer Config: Supported 00:11:57.109 Get LBA Status Capability: Not Supported 00:11:57.109 Command & Feature Lockdown Capability: Not Supported 00:11:57.109 Abort Command Limit: 4 00:11:57.109 Async Event Request Limit: 4 00:11:57.109 Number of Firmware Slots: N/A 00:11:57.110 Firmware Slot 1 Read-Only: N/A 00:11:57.110 Firmware Activation Without Reset: N/A 00:11:57.110 Multiple Update Detection Support: N/A 00:11:57.110 Firmware Update Granularity: No Information Provided 00:11:57.110 Per-Namespace SMART Log: Yes 00:11:57.110 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.110 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:57.110 Command Effects Log Page: Supported 00:11:57.110 Get Log Page Extended Data: Supported 00:11:57.110 Telemetry Log Pages: Not Supported 00:11:57.110 Persistent Event Log Pages: Not Supported 00:11:57.110 Supported Log Pages Log Page: May Support 00:11:57.110 Commands Supported & Effects Log Page: Not Supported 00:11:57.110 Feature Identifiers & Effects Log Page:May Support 00:11:57.110 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.110 Data Area 4 for Telemetry Log: Not Supported 00:11:57.110 Error Log Page Entries Supported: 1 00:11:57.110 Keep Alive: Not Supported 00:11:57.110 00:11:57.110 NVM Command Set Attributes 00:11:57.110 ========================== 00:11:57.110 Submission Queue Entry Size 00:11:57.110 Max: 64 00:11:57.110 Min: 64 00:11:57.110 Completion Queue Entry Size 00:11:57.110 Max: 16 00:11:57.110 Min: 16 00:11:57.110 Number of Namespaces: 256 00:11:57.110 Compare Command: Supported 00:11:57.110 Write Uncorrectable Command: Not Supported 00:11:57.110 Dataset Management Command: Supported 00:11:57.110 Write Zeroes Command: Supported 00:11:57.110 Set Features Save Field: Supported 00:11:57.110 Reservations: Not Supported 00:11:57.110 Timestamp: Supported 00:11:57.110 Copy: Supported 00:11:57.110 Volatile Write Cache: Present 00:11:57.110 Atomic Write Unit (Normal): 1 00:11:57.110 Atomic Write Unit (PFail): 1 00:11:57.110 Atomic Compare & Write Unit: 1 00:11:57.110 Fused Compare & Write: Not Supported 00:11:57.110 Scatter-Gather List 00:11:57.110 SGL Command Set: Supported 00:11:57.110 SGL Keyed: Not Supported 00:11:57.110 SGL Bit Bucket Descriptor: Not Supported 00:11:57.110 SGL Metadata Pointer: Not Supported 00:11:57.110 Oversized SGL: Not Supported 00:11:57.110 SGL Metadata Address: Not Supported 00:11:57.110 SGL Offset: Not Supported 00:11:57.110 Transport SGL Data Block: Not Supported 00:11:57.110 Replay Protected Memory Block: Not Supported 00:11:57.110 00:11:57.110 Firmware Slot Information 00:11:57.110 ========================= 00:11:57.110 Active slot: 1 00:11:57.110 Slot 1 Firmware Revision: 1.0 00:11:57.110 00:11:57.110 00:11:57.110 Commands Supported and Effects 00:11:57.110 ============================== 00:11:57.110 Admin Commands 00:11:57.110 -------------- 00:11:57.110 Delete I/O Submission Queue (00h): Supported 00:11:57.110 Create I/O Submission Queue (01h): Supported 00:11:57.110 
Get Log Page (02h): Supported 00:11:57.110 Delete I/O Completion Queue (04h): Supported 00:11:57.110 Create I/O Completion Queue (05h): Supported 00:11:57.110 Identify (06h): Supported 00:11:57.110 Abort (08h): Supported 00:11:57.110 Set Features (09h): Supported 00:11:57.110 Get Features (0Ah): Supported 00:11:57.110 Asynchronous Event Request (0Ch): Supported 00:11:57.110 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:57.110 Directive Send (19h): Supported 00:11:57.110 Directive Receive (1Ah): Supported 00:11:57.110 Virtualization Management (1Ch): Supported 00:11:57.110 Doorbell Buffer Config (7Ch): Supported 00:11:57.110 Format NVM (80h): Supported LBA-Change 00:11:57.110 I/O Commands 00:11:57.110 ------------ 00:11:57.110 Flush (00h): Supported LBA-Change 00:11:57.110 Write (01h): Supported LBA-Change 00:11:57.110 Read (02h): Supported 00:11:57.110 Compare (05h): Supported 00:11:57.110 Write Zeroes (08h): Supported LBA-Change 00:11:57.110 Dataset Management (09h): Supported LBA-Change 00:11:57.110 Unknown (0Ch): Supported 00:11:57.110 Unknown (12h): Supported 00:11:57.110 Copy (19h): Supported LBA-Change 00:11:57.110 Unknown (1Dh): Supported LBA-Change 00:11:57.110 00:11:57.110 Error Log 00:11:57.110 ========= 00:11:57.110 00:11:57.110 Arbitration 00:11:57.110 =========== 00:11:57.110 Arbitration Burst: no limit 00:11:57.110 00:11:57.110 Power Management 00:11:57.110 ================ 00:11:57.110 Number of Power States: 1 00:11:57.110 Current Power State: Power State #0 00:11:57.110 Power State #0: 00:11:57.110 Max Power: 25.00 W 00:11:57.110 Non-Operational State: Operational 00:11:57.110 Entry Latency: 16 microseconds 00:11:57.110 Exit Latency: 4 microseconds 00:11:57.110 Relative Read Throughput: 0 00:11:57.110 Relative Read Latency: 0 00:11:57.110 Relative Write Throughput: 0 00:11:57.110 Relative Write Latency: 0 00:11:57.110 Idle Power: Not Reported 00:11:57.110 Active Power: Not Reported 00:11:57.110 Non-Operational Permissive Mode: Not Supported 00:11:57.110 00:11:57.110 Health Information 00:11:57.110 ================== 00:11:57.110 Critical Warnings: 00:11:57.110 Available Spare Space: OK 00:11:57.110 Temperature: OK 00:11:57.110 Device Reliability: OK 00:11:57.110 Read Only: No 00:11:57.110 Volatile Memory Backup: OK 00:11:57.110 Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.110 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:57.110 Available Spare: 0% 00:11:57.110 Available Spare Threshold: 0% 00:11:57.110 Life Percentage Used: 0% 00:11:57.110 Data Units Read: 703 00:11:57.110 Data Units Written: 631 00:11:57.110 Host Read Commands: 32028 00:11:57.110 Host Write Commands: 31814 00:11:57.110 Controller Busy Time: 0 minutes 00:11:57.110 Power Cycles: 0 00:11:57.110 Power On Hours: 0 hours 00:11:57.110 Unsafe Shutdowns: 0 00:11:57.110 Unrecoverable Media Errors: 0 00:11:57.110 Lifetime Error Log Entries: 0 00:11:57.110 Warning Temperature Time: 0 minutes 00:11:57.110 Critical Temperature Time: 0 minutes 00:11:57.110 00:11:57.110 Number of Queues 00:11:57.110 ================ 00:11:57.110 Number of I/O Submission Queues: 64 00:11:57.110 Number of I/O Completion Queues: 64 00:11:57.110 00:11:57.110 ZNS Specific Controller Data 00:11:57.110 ============================ 00:11:57.110 Zone Append Size Limit: 0 00:11:57.110 00:11:57.110 00:11:57.110 Active Namespaces 00:11:57.110 ================= 00:11:57.110 Namespace ID:1 00:11:57.110 Error Recovery Timeout: Unlimited 00:11:57.110 Command Set Identifier: NVM (00h) 00:11:57.110 Deallocate: Supported 
00:11:57.110 Deallocated/Unwritten Error: Supported 00:11:57.110 Deallocated Read Value: All 0x00 00:11:57.110 Deallocate in Write Zeroes: Not Supported 00:11:57.110 Deallocated Guard Field: 0xFFFF 00:11:57.110 Flush: Supported 00:11:57.110 Reservation: Not Supported 00:11:57.110 Metadata Transferred as: Separate Metadata Buffer 00:11:57.110 Namespace Sharing Capabilities: Private 00:11:57.110 Size (in LBAs): 1548666 (5GiB) 00:11:57.110 Capacity (in LBAs): 1548666 (5GiB) 00:11:57.110 Utilization (in LBAs): 1548666 (5GiB) 00:11:57.110 Thin Provisioning: Not Supported 00:11:57.110 Per-NS Atomic Units: No 00:11:57.110 Maximum Single Source Range Length: 128 00:11:57.110 Maximum Copy Length: 128 00:11:57.110 Maximum Source Range Count: 128 00:11:57.110 NGUID/EUI64 Never Reused: No 00:11:57.110 Namespace Write Protected: No 00:11:57.110 Number of LBA Formats: 8 00:11:57.110 Current LBA Format: LBA Format #07 00:11:57.110 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.110 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.110 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.110 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.110 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.110 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.110 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.110 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.110 00:11:57.110 NVM Specific Namespace Data 00:11:57.110 =========================== 00:11:57.110 Logical Block Storage Tag Mask: 0 00:11:57.110 Protection Information Capabilities: 00:11:57.110 16b Guard Protection Information Storage Tag Support: No 00:11:57.110 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:57.110 Storage Tag Check Read Support: No 00:11:57.110 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.111 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.111 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.111 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.111 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.111 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.111 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.111 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.111 09:08:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:57.111 09:08:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:57.370 ===================================================== 00:11:57.370 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:57.370 ===================================================== 00:11:57.370 Controller Capabilities/Features 00:11:57.370 ================================ 00:11:57.370 Vendor ID: 1b36 00:11:57.370 Subsystem Vendor ID: 1af4 00:11:57.370 Serial Number: 12341 00:11:57.370 Model Number: QEMU NVMe Ctrl 00:11:57.370 Firmware Version: 8.0.0 00:11:57.370 Recommended Arb Burst: 6 00:11:57.370 IEEE OUI Identifier: 00 54 52 00:11:57.370 Multi-path I/O 00:11:57.370 May have multiple subsystem ports: No 00:11:57.370 May have multiple 
controllers: No 00:11:57.370 Associated with SR-IOV VF: No 00:11:57.370 Max Data Transfer Size: 524288 00:11:57.370 Max Number of Namespaces: 256 00:11:57.370 Max Number of I/O Queues: 64 00:11:57.370 NVMe Specification Version (VS): 1.4 00:11:57.370 NVMe Specification Version (Identify): 1.4 00:11:57.370 Maximum Queue Entries: 2048 00:11:57.370 Contiguous Queues Required: Yes 00:11:57.370 Arbitration Mechanisms Supported 00:11:57.370 Weighted Round Robin: Not Supported 00:11:57.370 Vendor Specific: Not Supported 00:11:57.370 Reset Timeout: 7500 ms 00:11:57.370 Doorbell Stride: 4 bytes 00:11:57.370 NVM Subsystem Reset: Not Supported 00:11:57.370 Command Sets Supported 00:11:57.370 NVM Command Set: Supported 00:11:57.370 Boot Partition: Not Supported 00:11:57.370 Memory Page Size Minimum: 4096 bytes 00:11:57.370 Memory Page Size Maximum: 65536 bytes 00:11:57.370 Persistent Memory Region: Not Supported 00:11:57.370 Optional Asynchronous Events Supported 00:11:57.370 Namespace Attribute Notices: Supported 00:11:57.370 Firmware Activation Notices: Not Supported 00:11:57.370 ANA Change Notices: Not Supported 00:11:57.370 PLE Aggregate Log Change Notices: Not Supported 00:11:57.370 LBA Status Info Alert Notices: Not Supported 00:11:57.370 EGE Aggregate Log Change Notices: Not Supported 00:11:57.370 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.370 Zone Descriptor Change Notices: Not Supported 00:11:57.370 Discovery Log Change Notices: Not Supported 00:11:57.370 Controller Attributes 00:11:57.370 128-bit Host Identifier: Not Supported 00:11:57.370 Non-Operational Permissive Mode: Not Supported 00:11:57.370 NVM Sets: Not Supported 00:11:57.370 Read Recovery Levels: Not Supported 00:11:57.370 Endurance Groups: Not Supported 00:11:57.370 Predictable Latency Mode: Not Supported 00:11:57.370 Traffic Based Keep ALive: Not Supported 00:11:57.370 Namespace Granularity: Not Supported 00:11:57.370 SQ Associations: Not Supported 00:11:57.370 UUID List: Not Supported 00:11:57.370 Multi-Domain Subsystem: Not Supported 00:11:57.370 Fixed Capacity Management: Not Supported 00:11:57.370 Variable Capacity Management: Not Supported 00:11:57.370 Delete Endurance Group: Not Supported 00:11:57.370 Delete NVM Set: Not Supported 00:11:57.370 Extended LBA Formats Supported: Supported 00:11:57.370 Flexible Data Placement Supported: Not Supported 00:11:57.370 00:11:57.370 Controller Memory Buffer Support 00:11:57.370 ================================ 00:11:57.370 Supported: No 00:11:57.370 00:11:57.370 Persistent Memory Region Support 00:11:57.370 ================================ 00:11:57.370 Supported: No 00:11:57.370 00:11:57.370 Admin Command Set Attributes 00:11:57.370 ============================ 00:11:57.370 Security Send/Receive: Not Supported 00:11:57.370 Format NVM: Supported 00:11:57.370 Firmware Activate/Download: Not Supported 00:11:57.370 Namespace Management: Supported 00:11:57.370 Device Self-Test: Not Supported 00:11:57.370 Directives: Supported 00:11:57.370 NVMe-MI: Not Supported 00:11:57.370 Virtualization Management: Not Supported 00:11:57.370 Doorbell Buffer Config: Supported 00:11:57.370 Get LBA Status Capability: Not Supported 00:11:57.370 Command & Feature Lockdown Capability: Not Supported 00:11:57.370 Abort Command Limit: 4 00:11:57.370 Async Event Request Limit: 4 00:11:57.370 Number of Firmware Slots: N/A 00:11:57.370 Firmware Slot 1 Read-Only: N/A 00:11:57.370 Firmware Activation Without Reset: N/A 00:11:57.370 Multiple Update Detection Support: N/A 00:11:57.370 Firmware Update 
Granularity: No Information Provided 00:11:57.370 Per-Namespace SMART Log: Yes 00:11:57.370 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.370 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:57.370 Command Effects Log Page: Supported 00:11:57.370 Get Log Page Extended Data: Supported 00:11:57.370 Telemetry Log Pages: Not Supported 00:11:57.370 Persistent Event Log Pages: Not Supported 00:11:57.370 Supported Log Pages Log Page: May Support 00:11:57.370 Commands Supported & Effects Log Page: Not Supported 00:11:57.370 Feature Identifiers & Effects Log Page:May Support 00:11:57.370 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.370 Data Area 4 for Telemetry Log: Not Supported 00:11:57.370 Error Log Page Entries Supported: 1 00:11:57.370 Keep Alive: Not Supported 00:11:57.370 00:11:57.370 NVM Command Set Attributes 00:11:57.370 ========================== 00:11:57.370 Submission Queue Entry Size 00:11:57.370 Max: 64 00:11:57.370 Min: 64 00:11:57.370 Completion Queue Entry Size 00:11:57.370 Max: 16 00:11:57.370 Min: 16 00:11:57.370 Number of Namespaces: 256 00:11:57.370 Compare Command: Supported 00:11:57.370 Write Uncorrectable Command: Not Supported 00:11:57.370 Dataset Management Command: Supported 00:11:57.370 Write Zeroes Command: Supported 00:11:57.370 Set Features Save Field: Supported 00:11:57.370 Reservations: Not Supported 00:11:57.370 Timestamp: Supported 00:11:57.370 Copy: Supported 00:11:57.370 Volatile Write Cache: Present 00:11:57.370 Atomic Write Unit (Normal): 1 00:11:57.370 Atomic Write Unit (PFail): 1 00:11:57.370 Atomic Compare & Write Unit: 1 00:11:57.370 Fused Compare & Write: Not Supported 00:11:57.370 Scatter-Gather List 00:11:57.370 SGL Command Set: Supported 00:11:57.370 SGL Keyed: Not Supported 00:11:57.370 SGL Bit Bucket Descriptor: Not Supported 00:11:57.370 SGL Metadata Pointer: Not Supported 00:11:57.370 Oversized SGL: Not Supported 00:11:57.370 SGL Metadata Address: Not Supported 00:11:57.370 SGL Offset: Not Supported 00:11:57.370 Transport SGL Data Block: Not Supported 00:11:57.371 Replay Protected Memory Block: Not Supported 00:11:57.371 00:11:57.371 Firmware Slot Information 00:11:57.371 ========================= 00:11:57.371 Active slot: 1 00:11:57.371 Slot 1 Firmware Revision: 1.0 00:11:57.371 00:11:57.371 00:11:57.371 Commands Supported and Effects 00:11:57.371 ============================== 00:11:57.371 Admin Commands 00:11:57.371 -------------- 00:11:57.371 Delete I/O Submission Queue (00h): Supported 00:11:57.371 Create I/O Submission Queue (01h): Supported 00:11:57.371 Get Log Page (02h): Supported 00:11:57.371 Delete I/O Completion Queue (04h): Supported 00:11:57.371 Create I/O Completion Queue (05h): Supported 00:11:57.371 Identify (06h): Supported 00:11:57.371 Abort (08h): Supported 00:11:57.371 Set Features (09h): Supported 00:11:57.371 Get Features (0Ah): Supported 00:11:57.371 Asynchronous Event Request (0Ch): Supported 00:11:57.371 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:57.371 Directive Send (19h): Supported 00:11:57.371 Directive Receive (1Ah): Supported 00:11:57.371 Virtualization Management (1Ch): Supported 00:11:57.371 Doorbell Buffer Config (7Ch): Supported 00:11:57.371 Format NVM (80h): Supported LBA-Change 00:11:57.371 I/O Commands 00:11:57.371 ------------ 00:11:57.371 Flush (00h): Supported LBA-Change 00:11:57.371 Write (01h): Supported LBA-Change 00:11:57.371 Read (02h): Supported 00:11:57.371 Compare (05h): Supported 00:11:57.371 Write Zeroes (08h): Supported LBA-Change 00:11:57.371 
Dataset Management (09h): Supported LBA-Change 00:11:57.371 Unknown (0Ch): Supported 00:11:57.371 Unknown (12h): Supported 00:11:57.371 Copy (19h): Supported LBA-Change 00:11:57.371 Unknown (1Dh): Supported LBA-Change 00:11:57.371 00:11:57.371 Error Log 00:11:57.371 ========= 00:11:57.371 00:11:57.371 Arbitration 00:11:57.371 =========== 00:11:57.371 Arbitration Burst: no limit 00:11:57.371 00:11:57.371 Power Management 00:11:57.371 ================ 00:11:57.371 Number of Power States: 1 00:11:57.371 Current Power State: Power State #0 00:11:57.371 Power State #0: 00:11:57.371 Max Power: 25.00 W 00:11:57.371 Non-Operational State: Operational 00:11:57.371 Entry Latency: 16 microseconds 00:11:57.371 Exit Latency: 4 microseconds 00:11:57.371 Relative Read Throughput: 0 00:11:57.371 Relative Read Latency: 0 00:11:57.371 Relative Write Throughput: 0 00:11:57.371 Relative Write Latency: 0 00:11:57.371 Idle Power: Not Reported 00:11:57.371 Active Power: Not Reported 00:11:57.371 Non-Operational Permissive Mode: Not Supported 00:11:57.371 00:11:57.371 Health Information 00:11:57.371 ================== 00:11:57.371 Critical Warnings: 00:11:57.371 Available Spare Space: OK 00:11:57.371 Temperature: OK 00:11:57.371 Device Reliability: OK 00:11:57.371 Read Only: No 00:11:57.371 Volatile Memory Backup: OK 00:11:57.371 Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.371 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:57.371 Available Spare: 0% 00:11:57.371 Available Spare Threshold: 0% 00:11:57.371 Life Percentage Used: 0% 00:11:57.371 Data Units Read: 1092 00:11:57.371 Data Units Written: 959 00:11:57.371 Host Read Commands: 47835 00:11:57.371 Host Write Commands: 46623 00:11:57.371 Controller Busy Time: 0 minutes 00:11:57.371 Power Cycles: 0 00:11:57.371 Power On Hours: 0 hours 00:11:57.371 Unsafe Shutdowns: 0 00:11:57.371 Unrecoverable Media Errors: 0 00:11:57.371 Lifetime Error Log Entries: 0 00:11:57.371 Warning Temperature Time: 0 minutes 00:11:57.371 Critical Temperature Time: 0 minutes 00:11:57.371 00:11:57.371 Number of Queues 00:11:57.371 ================ 00:11:57.371 Number of I/O Submission Queues: 64 00:11:57.371 Number of I/O Completion Queues: 64 00:11:57.371 00:11:57.371 ZNS Specific Controller Data 00:11:57.371 ============================ 00:11:57.371 Zone Append Size Limit: 0 00:11:57.371 00:11:57.371 00:11:57.371 Active Namespaces 00:11:57.371 ================= 00:11:57.371 Namespace ID:1 00:11:57.371 Error Recovery Timeout: Unlimited 00:11:57.371 Command Set Identifier: NVM (00h) 00:11:57.371 Deallocate: Supported 00:11:57.371 Deallocated/Unwritten Error: Supported 00:11:57.371 Deallocated Read Value: All 0x00 00:11:57.371 Deallocate in Write Zeroes: Not Supported 00:11:57.371 Deallocated Guard Field: 0xFFFF 00:11:57.371 Flush: Supported 00:11:57.371 Reservation: Not Supported 00:11:57.371 Namespace Sharing Capabilities: Private 00:11:57.371 Size (in LBAs): 1310720 (5GiB) 00:11:57.371 Capacity (in LBAs): 1310720 (5GiB) 00:11:57.371 Utilization (in LBAs): 1310720 (5GiB) 00:11:57.371 Thin Provisioning: Not Supported 00:11:57.371 Per-NS Atomic Units: No 00:11:57.371 Maximum Single Source Range Length: 128 00:11:57.371 Maximum Copy Length: 128 00:11:57.371 Maximum Source Range Count: 128 00:11:57.371 NGUID/EUI64 Never Reused: No 00:11:57.371 Namespace Write Protected: No 00:11:57.371 Number of LBA Formats: 8 00:11:57.371 Current LBA Format: LBA Format #04 00:11:57.371 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.371 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:11:57.371 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.371 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.371 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.371 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.371 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.371 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.371 00:11:57.371 NVM Specific Namespace Data 00:11:57.371 =========================== 00:11:57.371 Logical Block Storage Tag Mask: 0 00:11:57.371 Protection Information Capabilities: 00:11:57.371 16b Guard Protection Information Storage Tag Support: No 00:11:57.371 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:57.371 Storage Tag Check Read Support: No 00:11:57.371 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.371 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.371 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.371 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.371 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.371 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.371 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.371 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.371 09:08:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:57.371 09:08:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:57.631 ===================================================== 00:11:57.631 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:57.631 ===================================================== 00:11:57.631 Controller Capabilities/Features 00:11:57.631 ================================ 00:11:57.631 Vendor ID: 1b36 00:11:57.631 Subsystem Vendor ID: 1af4 00:11:57.631 Serial Number: 12342 00:11:57.631 Model Number: QEMU NVMe Ctrl 00:11:57.631 Firmware Version: 8.0.0 00:11:57.631 Recommended Arb Burst: 6 00:11:57.631 IEEE OUI Identifier: 00 54 52 00:11:57.631 Multi-path I/O 00:11:57.631 May have multiple subsystem ports: No 00:11:57.631 May have multiple controllers: No 00:11:57.631 Associated with SR-IOV VF: No 00:11:57.631 Max Data Transfer Size: 524288 00:11:57.631 Max Number of Namespaces: 256 00:11:57.631 Max Number of I/O Queues: 64 00:11:57.631 NVMe Specification Version (VS): 1.4 00:11:57.631 NVMe Specification Version (Identify): 1.4 00:11:57.631 Maximum Queue Entries: 2048 00:11:57.631 Contiguous Queues Required: Yes 00:11:57.631 Arbitration Mechanisms Supported 00:11:57.631 Weighted Round Robin: Not Supported 00:11:57.631 Vendor Specific: Not Supported 00:11:57.631 Reset Timeout: 7500 ms 00:11:57.631 Doorbell Stride: 4 bytes 00:11:57.631 NVM Subsystem Reset: Not Supported 00:11:57.631 Command Sets Supported 00:11:57.631 NVM Command Set: Supported 00:11:57.631 Boot Partition: Not Supported 00:11:57.631 Memory Page Size Minimum: 4096 bytes 00:11:57.631 Memory Page Size Maximum: 65536 bytes 00:11:57.631 Persistent Memory Region: Not Supported 00:11:57.631 Optional Asynchronous Events Supported 00:11:57.631 Namespace Attribute Notices: Supported 00:11:57.631 Firmware 
Activation Notices: Not Supported 00:11:57.631 ANA Change Notices: Not Supported 00:11:57.631 PLE Aggregate Log Change Notices: Not Supported 00:11:57.631 LBA Status Info Alert Notices: Not Supported 00:11:57.631 EGE Aggregate Log Change Notices: Not Supported 00:11:57.631 Normal NVM Subsystem Shutdown event: Not Supported 00:11:57.631 Zone Descriptor Change Notices: Not Supported 00:11:57.631 Discovery Log Change Notices: Not Supported 00:11:57.631 Controller Attributes 00:11:57.631 128-bit Host Identifier: Not Supported 00:11:57.631 Non-Operational Permissive Mode: Not Supported 00:11:57.631 NVM Sets: Not Supported 00:11:57.631 Read Recovery Levels: Not Supported 00:11:57.631 Endurance Groups: Not Supported 00:11:57.631 Predictable Latency Mode: Not Supported 00:11:57.631 Traffic Based Keep ALive: Not Supported 00:11:57.631 Namespace Granularity: Not Supported 00:11:57.631 SQ Associations: Not Supported 00:11:57.631 UUID List: Not Supported 00:11:57.631 Multi-Domain Subsystem: Not Supported 00:11:57.631 Fixed Capacity Management: Not Supported 00:11:57.631 Variable Capacity Management: Not Supported 00:11:57.631 Delete Endurance Group: Not Supported 00:11:57.631 Delete NVM Set: Not Supported 00:11:57.631 Extended LBA Formats Supported: Supported 00:11:57.631 Flexible Data Placement Supported: Not Supported 00:11:57.631 00:11:57.631 Controller Memory Buffer Support 00:11:57.631 ================================ 00:11:57.631 Supported: No 00:11:57.631 00:11:57.631 Persistent Memory Region Support 00:11:57.631 ================================ 00:11:57.631 Supported: No 00:11:57.631 00:11:57.631 Admin Command Set Attributes 00:11:57.631 ============================ 00:11:57.631 Security Send/Receive: Not Supported 00:11:57.631 Format NVM: Supported 00:11:57.631 Firmware Activate/Download: Not Supported 00:11:57.631 Namespace Management: Supported 00:11:57.631 Device Self-Test: Not Supported 00:11:57.631 Directives: Supported 00:11:57.631 NVMe-MI: Not Supported 00:11:57.631 Virtualization Management: Not Supported 00:11:57.631 Doorbell Buffer Config: Supported 00:11:57.631 Get LBA Status Capability: Not Supported 00:11:57.631 Command & Feature Lockdown Capability: Not Supported 00:11:57.631 Abort Command Limit: 4 00:11:57.631 Async Event Request Limit: 4 00:11:57.631 Number of Firmware Slots: N/A 00:11:57.631 Firmware Slot 1 Read-Only: N/A 00:11:57.631 Firmware Activation Without Reset: N/A 00:11:57.631 Multiple Update Detection Support: N/A 00:11:57.631 Firmware Update Granularity: No Information Provided 00:11:57.631 Per-Namespace SMART Log: Yes 00:11:57.631 Asymmetric Namespace Access Log Page: Not Supported 00:11:57.631 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:57.631 Command Effects Log Page: Supported 00:11:57.631 Get Log Page Extended Data: Supported 00:11:57.631 Telemetry Log Pages: Not Supported 00:11:57.631 Persistent Event Log Pages: Not Supported 00:11:57.631 Supported Log Pages Log Page: May Support 00:11:57.631 Commands Supported & Effects Log Page: Not Supported 00:11:57.631 Feature Identifiers & Effects Log Page:May Support 00:11:57.631 NVMe-MI Commands & Effects Log Page: May Support 00:11:57.631 Data Area 4 for Telemetry Log: Not Supported 00:11:57.631 Error Log Page Entries Supported: 1 00:11:57.631 Keep Alive: Not Supported 00:11:57.631 00:11:57.631 NVM Command Set Attributes 00:11:57.631 ========================== 00:11:57.631 Submission Queue Entry Size 00:11:57.631 Max: 64 00:11:57.631 Min: 64 00:11:57.631 Completion Queue Entry Size 00:11:57.631 Max: 16 
00:11:57.631 Min: 16 00:11:57.631 Number of Namespaces: 256 00:11:57.631 Compare Command: Supported 00:11:57.631 Write Uncorrectable Command: Not Supported 00:11:57.631 Dataset Management Command: Supported 00:11:57.631 Write Zeroes Command: Supported 00:11:57.631 Set Features Save Field: Supported 00:11:57.631 Reservations: Not Supported 00:11:57.631 Timestamp: Supported 00:11:57.631 Copy: Supported 00:11:57.631 Volatile Write Cache: Present 00:11:57.631 Atomic Write Unit (Normal): 1 00:11:57.631 Atomic Write Unit (PFail): 1 00:11:57.631 Atomic Compare & Write Unit: 1 00:11:57.631 Fused Compare & Write: Not Supported 00:11:57.631 Scatter-Gather List 00:11:57.631 SGL Command Set: Supported 00:11:57.631 SGL Keyed: Not Supported 00:11:57.631 SGL Bit Bucket Descriptor: Not Supported 00:11:57.631 SGL Metadata Pointer: Not Supported 00:11:57.631 Oversized SGL: Not Supported 00:11:57.631 SGL Metadata Address: Not Supported 00:11:57.631 SGL Offset: Not Supported 00:11:57.631 Transport SGL Data Block: Not Supported 00:11:57.631 Replay Protected Memory Block: Not Supported 00:11:57.631 00:11:57.631 Firmware Slot Information 00:11:57.631 ========================= 00:11:57.631 Active slot: 1 00:11:57.631 Slot 1 Firmware Revision: 1.0 00:11:57.631 00:11:57.631 00:11:57.631 Commands Supported and Effects 00:11:57.631 ============================== 00:11:57.631 Admin Commands 00:11:57.631 -------------- 00:11:57.631 Delete I/O Submission Queue (00h): Supported 00:11:57.631 Create I/O Submission Queue (01h): Supported 00:11:57.631 Get Log Page (02h): Supported 00:11:57.631 Delete I/O Completion Queue (04h): Supported 00:11:57.631 Create I/O Completion Queue (05h): Supported 00:11:57.631 Identify (06h): Supported 00:11:57.631 Abort (08h): Supported 00:11:57.631 Set Features (09h): Supported 00:11:57.631 Get Features (0Ah): Supported 00:11:57.631 Asynchronous Event Request (0Ch): Supported 00:11:57.631 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:57.631 Directive Send (19h): Supported 00:11:57.631 Directive Receive (1Ah): Supported 00:11:57.631 Virtualization Management (1Ch): Supported 00:11:57.631 Doorbell Buffer Config (7Ch): Supported 00:11:57.631 Format NVM (80h): Supported LBA-Change 00:11:57.631 I/O Commands 00:11:57.631 ------------ 00:11:57.631 Flush (00h): Supported LBA-Change 00:11:57.631 Write (01h): Supported LBA-Change 00:11:57.631 Read (02h): Supported 00:11:57.631 Compare (05h): Supported 00:11:57.631 Write Zeroes (08h): Supported LBA-Change 00:11:57.631 Dataset Management (09h): Supported LBA-Change 00:11:57.631 Unknown (0Ch): Supported 00:11:57.631 Unknown (12h): Supported 00:11:57.631 Copy (19h): Supported LBA-Change 00:11:57.631 Unknown (1Dh): Supported LBA-Change 00:11:57.631 00:11:57.631 Error Log 00:11:57.631 ========= 00:11:57.631 00:11:57.631 Arbitration 00:11:57.631 =========== 00:11:57.631 Arbitration Burst: no limit 00:11:57.631 00:11:57.631 Power Management 00:11:57.631 ================ 00:11:57.631 Number of Power States: 1 00:11:57.631 Current Power State: Power State #0 00:11:57.631 Power State #0: 00:11:57.631 Max Power: 25.00 W 00:11:57.631 Non-Operational State: Operational 00:11:57.631 Entry Latency: 16 microseconds 00:11:57.632 Exit Latency: 4 microseconds 00:11:57.632 Relative Read Throughput: 0 00:11:57.632 Relative Read Latency: 0 00:11:57.632 Relative Write Throughput: 0 00:11:57.632 Relative Write Latency: 0 00:11:57.632 Idle Power: Not Reported 00:11:57.632 Active Power: Not Reported 00:11:57.632 Non-Operational Permissive Mode: Not Supported 
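The Health Information block that follows, like the others in these identify dumps, reports the controller's Composite Temperature in Kelvin, which is how the NVMe SMART / Health Information log page defines it; the Celsius value in parentheses is obtained by subtracting 273 from the raw Kelvin reading (323 Kelvin -> 50 Celsius here). A minimal sketch of that conversion, using only the values shown in this log (the helper name is illustrative, not an SPDK API):

    /* Reproduce the "Current Temperature: 323 Kelvin (50 Celsius)" line:
     * the raw Composite Temperature is a 16-bit Kelvin value, and the
     * printed Celsius figure is simply kelvin - 273. */
    #include <stdint.h>
    #include <stdio.h>

    static int kelvin_to_celsius(uint16_t kelvin)
    {
        return (int)kelvin - 273;
    }

    int main(void)
    {
        uint16_t composite_temp = 323; /* value reported in this dump */
        printf("Current Temperature: %u Kelvin (%d Celsius)\n",
               (unsigned int)composite_temp,
               kelvin_to_celsius(composite_temp));
        return 0;
    }

The same arithmetic explains the threshold line that follows it: 343 Kelvin - 273 = 70 Celsius.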
00:11:57.632 00:11:57.632 Health Information 00:11:57.632 ================== 00:11:57.632 Critical Warnings: 00:11:57.632 Available Spare Space: OK 00:11:57.632 Temperature: OK 00:11:57.632 Device Reliability: OK 00:11:57.632 Read Only: No 00:11:57.632 Volatile Memory Backup: OK 00:11:57.632 Current Temperature: 323 Kelvin (50 Celsius) 00:11:57.632 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:57.632 Available Spare: 0% 00:11:57.632 Available Spare Threshold: 0% 00:11:57.632 Life Percentage Used: 0% 00:11:57.632 Data Units Read: 2240 00:11:57.632 Data Units Written: 2027 00:11:57.632 Host Read Commands: 97598 00:11:57.632 Host Write Commands: 95867 00:11:57.632 Controller Busy Time: 0 minutes 00:11:57.632 Power Cycles: 0 00:11:57.632 Power On Hours: 0 hours 00:11:57.632 Unsafe Shutdowns: 0 00:11:57.632 Unrecoverable Media Errors: 0 00:11:57.632 Lifetime Error Log Entries: 0 00:11:57.632 Warning Temperature Time: 0 minutes 00:11:57.632 Critical Temperature Time: 0 minutes 00:11:57.632 00:11:57.632 Number of Queues 00:11:57.632 ================ 00:11:57.632 Number of I/O Submission Queues: 64 00:11:57.632 Number of I/O Completion Queues: 64 00:11:57.632 00:11:57.632 ZNS Specific Controller Data 00:11:57.632 ============================ 00:11:57.632 Zone Append Size Limit: 0 00:11:57.632 00:11:57.632 00:11:57.632 Active Namespaces 00:11:57.632 ================= 00:11:57.632 Namespace ID:1 00:11:57.632 Error Recovery Timeout: Unlimited 00:11:57.632 Command Set Identifier: NVM (00h) 00:11:57.632 Deallocate: Supported 00:11:57.632 Deallocated/Unwritten Error: Supported 00:11:57.632 Deallocated Read Value: All 0x00 00:11:57.632 Deallocate in Write Zeroes: Not Supported 00:11:57.632 Deallocated Guard Field: 0xFFFF 00:11:57.632 Flush: Supported 00:11:57.632 Reservation: Not Supported 00:11:57.632 Namespace Sharing Capabilities: Private 00:11:57.632 Size (in LBAs): 1048576 (4GiB) 00:11:57.632 Capacity (in LBAs): 1048576 (4GiB) 00:11:57.632 Utilization (in LBAs): 1048576 (4GiB) 00:11:57.632 Thin Provisioning: Not Supported 00:11:57.632 Per-NS Atomic Units: No 00:11:57.632 Maximum Single Source Range Length: 128 00:11:57.632 Maximum Copy Length: 128 00:11:57.632 Maximum Source Range Count: 128 00:11:57.632 NGUID/EUI64 Never Reused: No 00:11:57.632 Namespace Write Protected: No 00:11:57.632 Number of LBA Formats: 8 00:11:57.632 Current LBA Format: LBA Format #04 00:11:57.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.632 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.632 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.632 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.632 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.632 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.632 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.632 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.632 00:11:57.632 NVM Specific Namespace Data 00:11:57.632 =========================== 00:11:57.632 Logical Block Storage Tag Mask: 0 00:11:57.632 Protection Information Capabilities: 00:11:57.632 16b Guard Protection Information Storage Tag Support: No 00:11:57.632 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:57.632 Storage Tag Check Read Support: No 00:11:57.632 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Namespace ID:2 00:11:57.632 Error Recovery Timeout: Unlimited 00:11:57.632 Command Set Identifier: NVM (00h) 00:11:57.632 Deallocate: Supported 00:11:57.632 Deallocated/Unwritten Error: Supported 00:11:57.632 Deallocated Read Value: All 0x00 00:11:57.632 Deallocate in Write Zeroes: Not Supported 00:11:57.632 Deallocated Guard Field: 0xFFFF 00:11:57.632 Flush: Supported 00:11:57.632 Reservation: Not Supported 00:11:57.632 Namespace Sharing Capabilities: Private 00:11:57.632 Size (in LBAs): 1048576 (4GiB) 00:11:57.632 Capacity (in LBAs): 1048576 (4GiB) 00:11:57.632 Utilization (in LBAs): 1048576 (4GiB) 00:11:57.632 Thin Provisioning: Not Supported 00:11:57.632 Per-NS Atomic Units: No 00:11:57.632 Maximum Single Source Range Length: 128 00:11:57.632 Maximum Copy Length: 128 00:11:57.632 Maximum Source Range Count: 128 00:11:57.632 NGUID/EUI64 Never Reused: No 00:11:57.632 Namespace Write Protected: No 00:11:57.632 Number of LBA Formats: 8 00:11:57.632 Current LBA Format: LBA Format #04 00:11:57.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.632 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.632 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.632 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.632 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.632 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.632 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.632 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.632 00:11:57.632 NVM Specific Namespace Data 00:11:57.632 =========================== 00:11:57.632 Logical Block Storage Tag Mask: 0 00:11:57.632 Protection Information Capabilities: 00:11:57.632 16b Guard Protection Information Storage Tag Support: No 00:11:57.632 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:57.632 Storage Tag Check Read Support: No 00:11:57.632 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Namespace ID:3 00:11:57.632 Error Recovery Timeout: Unlimited 00:11:57.632 Command Set Identifier: NVM (00h) 00:11:57.632 Deallocate: Supported 00:11:57.632 Deallocated/Unwritten Error: Supported 00:11:57.632 Deallocated Read 
Value: All 0x00 00:11:57.632 Deallocate in Write Zeroes: Not Supported 00:11:57.632 Deallocated Guard Field: 0xFFFF 00:11:57.632 Flush: Supported 00:11:57.632 Reservation: Not Supported 00:11:57.632 Namespace Sharing Capabilities: Private 00:11:57.632 Size (in LBAs): 1048576 (4GiB) 00:11:57.632 Capacity (in LBAs): 1048576 (4GiB) 00:11:57.632 Utilization (in LBAs): 1048576 (4GiB) 00:11:57.632 Thin Provisioning: Not Supported 00:11:57.632 Per-NS Atomic Units: No 00:11:57.632 Maximum Single Source Range Length: 128 00:11:57.632 Maximum Copy Length: 128 00:11:57.632 Maximum Source Range Count: 128 00:11:57.632 NGUID/EUI64 Never Reused: No 00:11:57.632 Namespace Write Protected: No 00:11:57.632 Number of LBA Formats: 8 00:11:57.632 Current LBA Format: LBA Format #04 00:11:57.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:57.632 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:57.632 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:57.632 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:57.632 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:57.632 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:57.632 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:57.632 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:57.632 00:11:57.632 NVM Specific Namespace Data 00:11:57.632 =========================== 00:11:57.632 Logical Block Storage Tag Mask: 0 00:11:57.632 Protection Information Capabilities: 00:11:57.632 16b Guard Protection Information Storage Tag Support: No 00:11:57.632 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:57.632 Storage Tag Check Read Support: No 00:11:57.632 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.632 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.633 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.633 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.633 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.633 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:57.633 09:08:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:57.633 09:08:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:58.201 ===================================================== 00:11:58.201 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:58.201 ===================================================== 00:11:58.201 Controller Capabilities/Features 00:11:58.201 ================================ 00:11:58.201 Vendor ID: 1b36 00:11:58.201 Subsystem Vendor ID: 1af4 00:11:58.201 Serial Number: 12343 00:11:58.201 Model Number: QEMU NVMe Ctrl 00:11:58.201 Firmware Version: 8.0.0 00:11:58.201 Recommended Arb Burst: 6 00:11:58.201 IEEE OUI Identifier: 00 54 52 00:11:58.201 Multi-path I/O 00:11:58.201 May have multiple subsystem ports: No 00:11:58.201 May have multiple controllers: Yes 00:11:58.201 Associated with SR-IOV VF: No 00:11:58.201 Max Data Transfer Size: 524288 00:11:58.201 Max Number of Namespaces: 
256 00:11:58.201 Max Number of I/O Queues: 64 00:11:58.201 NVMe Specification Version (VS): 1.4 00:11:58.201 NVMe Specification Version (Identify): 1.4 00:11:58.201 Maximum Queue Entries: 2048 00:11:58.201 Contiguous Queues Required: Yes 00:11:58.201 Arbitration Mechanisms Supported 00:11:58.201 Weighted Round Robin: Not Supported 00:11:58.201 Vendor Specific: Not Supported 00:11:58.201 Reset Timeout: 7500 ms 00:11:58.201 Doorbell Stride: 4 bytes 00:11:58.201 NVM Subsystem Reset: Not Supported 00:11:58.201 Command Sets Supported 00:11:58.201 NVM Command Set: Supported 00:11:58.201 Boot Partition: Not Supported 00:11:58.201 Memory Page Size Minimum: 4096 bytes 00:11:58.201 Memory Page Size Maximum: 65536 bytes 00:11:58.201 Persistent Memory Region: Not Supported 00:11:58.201 Optional Asynchronous Events Supported 00:11:58.201 Namespace Attribute Notices: Supported 00:11:58.201 Firmware Activation Notices: Not Supported 00:11:58.201 ANA Change Notices: Not Supported 00:11:58.201 PLE Aggregate Log Change Notices: Not Supported 00:11:58.201 LBA Status Info Alert Notices: Not Supported 00:11:58.201 EGE Aggregate Log Change Notices: Not Supported 00:11:58.201 Normal NVM Subsystem Shutdown event: Not Supported 00:11:58.201 Zone Descriptor Change Notices: Not Supported 00:11:58.201 Discovery Log Change Notices: Not Supported 00:11:58.201 Controller Attributes 00:11:58.201 128-bit Host Identifier: Not Supported 00:11:58.201 Non-Operational Permissive Mode: Not Supported 00:11:58.201 NVM Sets: Not Supported 00:11:58.201 Read Recovery Levels: Not Supported 00:11:58.201 Endurance Groups: Supported 00:11:58.201 Predictable Latency Mode: Not Supported 00:11:58.201 Traffic Based Keep Alive: Not Supported 00:11:58.201 Namespace Granularity: Not Supported 00:11:58.201 SQ Associations: Not Supported 00:11:58.201 UUID List: Not Supported 00:11:58.201 Multi-Domain Subsystem: Not Supported 00:11:58.201 Fixed Capacity Management: Not Supported 00:11:58.201 Variable Capacity Management: Not Supported 00:11:58.201 Delete Endurance Group: Not Supported 00:11:58.201 Delete NVM Set: Not Supported 00:11:58.201 Extended LBA Formats Supported: Supported 00:11:58.201 Flexible Data Placement Supported: Supported 00:11:58.201 00:11:58.201 Controller Memory Buffer Support 00:11:58.201 ================================ 00:11:58.201 Supported: No 00:11:58.201 00:11:58.201 Persistent Memory Region Support 00:11:58.201 ================================ 00:11:58.201 Supported: No 00:11:58.201 00:11:58.201 Admin Command Set Attributes 00:11:58.201 ============================ 00:11:58.201 Security Send/Receive: Not Supported 00:11:58.201 Format NVM: Supported 00:11:58.201 Firmware Activate/Download: Not Supported 00:11:58.201 Namespace Management: Supported 00:11:58.201 Device Self-Test: Not Supported 00:11:58.201 Directives: Supported 00:11:58.201 NVMe-MI: Not Supported 00:11:58.201 Virtualization Management: Not Supported 00:11:58.201 Doorbell Buffer Config: Supported 00:11:58.201 Get LBA Status Capability: Not Supported 00:11:58.201 Command & Feature Lockdown Capability: Not Supported 00:11:58.201 Abort Command Limit: 4 00:11:58.201 Async Event Request Limit: 4 00:11:58.201 Number of Firmware Slots: N/A 00:11:58.201 Firmware Slot 1 Read-Only: N/A 00:11:58.201 Firmware Activation Without Reset: N/A 00:11:58.201 Multiple Update Detection Support: N/A 00:11:58.201 Firmware Update Granularity: No Information Provided 00:11:58.201 Per-Namespace SMART Log: Yes 00:11:58.201 Asymmetric Namespace Access Log Page: Not Supported
00:11:58.201 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:58.201 Command Effects Log Page: Supported 00:11:58.201 Get Log Page Extended Data: Supported 00:11:58.201 Telemetry Log Pages: Not Supported 00:11:58.201 Persistent Event Log Pages: Not Supported 00:11:58.201 Supported Log Pages Log Page: May Support 00:11:58.201 Commands Supported & Effects Log Page: Not Supported 00:11:58.201 Feature Identifiers & Effects Log Page: May Support 00:11:58.201 NVMe-MI Commands & Effects Log Page: May Support 00:11:58.201 Data Area 4 for Telemetry Log: Not Supported 00:11:58.201 Error Log Page Entries Supported: 1 00:11:58.201 Keep Alive: Not Supported 00:11:58.201 00:11:58.201 NVM Command Set Attributes 00:11:58.201 ========================== 00:11:58.201 Submission Queue Entry Size 00:11:58.201 Max: 64 00:11:58.201 Min: 64 00:11:58.201 Completion Queue Entry Size 00:11:58.201 Max: 16 00:11:58.201 Min: 16 00:11:58.201 Number of Namespaces: 256 00:11:58.201 Compare Command: Supported 00:11:58.201 Write Uncorrectable Command: Not Supported 00:11:58.201 Dataset Management Command: Supported 00:11:58.201 Write Zeroes Command: Supported 00:11:58.201 Set Features Save Field: Supported 00:11:58.201 Reservations: Not Supported 00:11:58.201 Timestamp: Supported 00:11:58.201 Copy: Supported 00:11:58.201 Volatile Write Cache: Present 00:11:58.201 Atomic Write Unit (Normal): 1 00:11:58.201 Atomic Write Unit (PFail): 1 00:11:58.201 Atomic Compare & Write Unit: 1 00:11:58.201 Fused Compare & Write: Not Supported 00:11:58.201 Scatter-Gather List 00:11:58.201 SGL Command Set: Supported 00:11:58.201 SGL Keyed: Not Supported 00:11:58.201 SGL Bit Bucket Descriptor: Not Supported 00:11:58.201 SGL Metadata Pointer: Not Supported 00:11:58.201 Oversized SGL: Not Supported 00:11:58.201 SGL Metadata Address: Not Supported 00:11:58.201 SGL Offset: Not Supported 00:11:58.201 Transport SGL Data Block: Not Supported 00:11:58.201 Replay Protected Memory Block: Not Supported 00:11:58.201 00:11:58.201 Firmware Slot Information 00:11:58.201 ========================= 00:11:58.201 Active slot: 1 00:11:58.201 Slot 1 Firmware Revision: 1.0 00:11:58.201 00:11:58.201 00:11:58.201 Commands Supported and Effects 00:11:58.201 ============================== 00:11:58.201 Admin Commands 00:11:58.201 -------------- 00:11:58.201 Delete I/O Submission Queue (00h): Supported 00:11:58.201 Create I/O Submission Queue (01h): Supported 00:11:58.201 Get Log Page (02h): Supported 00:11:58.201 Delete I/O Completion Queue (04h): Supported 00:11:58.201 Create I/O Completion Queue (05h): Supported 00:11:58.201 Identify (06h): Supported 00:11:58.201 Abort (08h): Supported 00:11:58.201 Set Features (09h): Supported 00:11:58.201 Get Features (0Ah): Supported 00:11:58.201 Asynchronous Event Request (0Ch): Supported 00:11:58.201 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:58.201 Directive Send (19h): Supported 00:11:58.201 Directive Receive (1Ah): Supported 00:11:58.201 Virtualization Management (1Ch): Supported 00:11:58.201 Doorbell Buffer Config (7Ch): Supported 00:11:58.201 Format NVM (80h): Supported LBA-Change 00:11:58.201 I/O Commands 00:11:58.201 ------------ 00:11:58.201 Flush (00h): Supported LBA-Change 00:11:58.201 Write (01h): Supported LBA-Change 00:11:58.201 Read (02h): Supported 00:11:58.201 Compare (05h): Supported 00:11:58.201 Write Zeroes (08h): Supported LBA-Change 00:11:58.201 Dataset Management (09h): Supported LBA-Change 00:11:58.201 Unknown (0Ch): Supported 00:11:58.201 Unknown (12h): Supported 00:11:58.201 Copy
(19h): Supported LBA-Change 00:11:58.201 Unknown (1Dh): Supported LBA-Change 00:11:58.201 00:11:58.201 Error Log 00:11:58.201 ========= 00:11:58.201 00:11:58.201 Arbitration 00:11:58.201 =========== 00:11:58.201 Arbitration Burst: no limit 00:11:58.201 00:11:58.201 Power Management 00:11:58.201 ================ 00:11:58.201 Number of Power States: 1 00:11:58.202 Current Power State: Power State #0 00:11:58.202 Power State #0: 00:11:58.202 Max Power: 25.00 W 00:11:58.202 Non-Operational State: Operational 00:11:58.202 Entry Latency: 16 microseconds 00:11:58.202 Exit Latency: 4 microseconds 00:11:58.202 Relative Read Throughput: 0 00:11:58.202 Relative Read Latency: 0 00:11:58.202 Relative Write Throughput: 0 00:11:58.202 Relative Write Latency: 0 00:11:58.202 Idle Power: Not Reported 00:11:58.202 Active Power: Not Reported 00:11:58.202 Non-Operational Permissive Mode: Not Supported 00:11:58.202 00:11:58.202 Health Information 00:11:58.202 ================== 00:11:58.202 Critical Warnings: 00:11:58.202 Available Spare Space: OK 00:11:58.202 Temperature: OK 00:11:58.202 Device Reliability: OK 00:11:58.202 Read Only: No 00:11:58.202 Volatile Memory Backup: OK 00:11:58.202 Current Temperature: 323 Kelvin (50 Celsius) 00:11:58.202 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:58.202 Available Spare: 0% 00:11:58.202 Available Spare Threshold: 0% 00:11:58.202 Life Percentage Used: 0% 00:11:58.202 Data Units Read: 797 00:11:58.202 Data Units Written: 726 00:11:58.202 Host Read Commands: 33003 00:11:58.202 Host Write Commands: 32426 00:11:58.202 Controller Busy Time: 0 minutes 00:11:58.202 Power Cycles: 0 00:11:58.202 Power On Hours: 0 hours 00:11:58.202 Unsafe Shutdowns: 0 00:11:58.202 Unrecoverable Media Errors: 0 00:11:58.202 Lifetime Error Log Entries: 0 00:11:58.202 Warning Temperature Time: 0 minutes 00:11:58.202 Critical Temperature Time: 0 minutes 00:11:58.202 00:11:58.202 Number of Queues 00:11:58.202 ================ 00:11:58.202 Number of I/O Submission Queues: 64 00:11:58.202 Number of I/O Completion Queues: 64 00:11:58.202 00:11:58.202 ZNS Specific Controller Data 00:11:58.202 ============================ 00:11:58.202 Zone Append Size Limit: 0 00:11:58.202 00:11:58.202 00:11:58.202 Active Namespaces 00:11:58.202 ================= 00:11:58.202 Namespace ID:1 00:11:58.202 Error Recovery Timeout: Unlimited 00:11:58.202 Command Set Identifier: NVM (00h) 00:11:58.202 Deallocate: Supported 00:11:58.202 Deallocated/Unwritten Error: Supported 00:11:58.202 Deallocated Read Value: All 0x00 00:11:58.202 Deallocate in Write Zeroes: Not Supported 00:11:58.202 Deallocated Guard Field: 0xFFFF 00:11:58.202 Flush: Supported 00:11:58.202 Reservation: Not Supported 00:11:58.202 Namespace Sharing Capabilities: Multiple Controllers 00:11:58.202 Size (in LBAs): 262144 (1GiB) 00:11:58.202 Capacity (in LBAs): 262144 (1GiB) 00:11:58.202 Utilization (in LBAs): 262144 (1GiB) 00:11:58.202 Thin Provisioning: Not Supported 00:11:58.202 Per-NS Atomic Units: No 00:11:58.202 Maximum Single Source Range Length: 128 00:11:58.202 Maximum Copy Length: 128 00:11:58.202 Maximum Source Range Count: 128 00:11:58.202 NGUID/EUI64 Never Reused: No 00:11:58.202 Namespace Write Protected: No 00:11:58.202 Endurance group ID: 1 00:11:58.202 Number of LBA Formats: 8 00:11:58.202 Current LBA Format: LBA Format #04 00:11:58.202 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:58.202 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:58.202 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:58.202 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:11:58.202 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:58.202 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:58.202 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:58.202 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:58.202 00:11:58.202 Get Feature FDP: 00:11:58.202 ================ 00:11:58.202 Enabled: Yes 00:11:58.202 FDP configuration index: 0 00:11:58.202 00:11:58.202 FDP configurations log page 00:11:58.202 =========================== 00:11:58.202 Number of FDP configurations: 1 00:11:58.202 Version: 0 00:11:58.202 Size: 112 00:11:58.202 FDP Configuration Descriptor: 0 00:11:58.202 Descriptor Size: 96 00:11:58.202 Reclaim Group Identifier format: 2 00:11:58.202 FDP Volatile Write Cache: Not Present 00:11:58.202 FDP Configuration: Valid 00:11:58.202 Vendor Specific Size: 0 00:11:58.202 Number of Reclaim Groups: 2 00:11:58.202 Number of Reclaim Unit Handles: 8 00:11:58.202 Max Placement Identifiers: 128 00:11:58.202 Number of Namespaces Supported: 256 00:11:58.202 Reclaim Unit Nominal Size: 6000000 bytes 00:11:58.202 Estimated Reclaim Unit Time Limit: Not Reported 00:11:58.202 RUH Desc #000: RUH Type: Initially Isolated 00:11:58.202 RUH Desc #001: RUH Type: Initially Isolated 00:11:58.202 RUH Desc #002: RUH Type: Initially Isolated 00:11:58.202 RUH Desc #003: RUH Type: Initially Isolated 00:11:58.202 RUH Desc #004: RUH Type: Initially Isolated 00:11:58.202 RUH Desc #005: RUH Type: Initially Isolated 00:11:58.202 RUH Desc #006: RUH Type: Initially Isolated 00:11:58.202 RUH Desc #007: RUH Type: Initially Isolated 00:11:58.202 00:11:58.202 FDP reclaim unit handle usage log page 00:11:58.202 ====================================== 00:11:58.202 Number of Reclaim Unit Handles: 8 00:11:58.202 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:58.202 RUH Usage Desc #001: RUH Attributes: Unused 00:11:58.202 RUH Usage Desc #002: RUH Attributes: Unused 00:11:58.202 RUH Usage Desc #003: RUH Attributes: Unused 00:11:58.202 RUH Usage Desc #004: RUH Attributes: Unused 00:11:58.202 RUH Usage Desc #005: RUH Attributes: Unused 00:11:58.202 RUH Usage Desc #006: RUH Attributes: Unused 00:11:58.202 RUH Usage Desc #007: RUH Attributes: Unused 00:11:58.202 00:11:58.202 FDP statistics log page 00:11:58.202 ======================= 00:11:58.202 Host bytes with metadata written: 459513856 00:11:58.202 Media bytes with metadata written: 459579392 00:11:58.202 Media bytes erased: 0 00:11:58.202 00:11:58.202 FDP events log page 00:11:58.202 =================== 00:11:58.202 Number of FDP events: 0 00:11:58.202 00:11:58.202 NVM Specific Namespace Data 00:11:58.202 =========================== 00:11:58.202 Logical Block Storage Tag Mask: 0 00:11:58.202 Protection Information Capabilities: 00:11:58.202 16b Guard Protection Information Storage Tag Support: No 00:11:58.202 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:58.202 Storage Tag Check Read Support: No 00:11:58.202 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:58.202 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:58.202 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:58.202 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:58.202 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:58.202 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:58.202 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:58.202 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:58.202 ************************************ 00:11:58.202 END TEST nvme_identify 00:11:58.202 ************************************ 00:11:58.202 00:11:58.202 real 0m1.715s 00:11:58.202 user 0m0.695s 00:11:58.202 sys 0m0.823s 00:11:58.202 09:08:53 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.202 09:08:53 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:58.202 09:08:53 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:58.202 09:08:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.202 09:08:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.202 09:08:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.202 ************************************ 00:11:58.202 START TEST nvme_perf 00:11:58.202 ************************************ 00:11:58.202 09:08:53 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:11:58.202 09:08:53 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:59.579 Initializing NVMe Controllers 00:11:59.579 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:59.579 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:59.579 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:59.579 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:59.579 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:59.579 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:59.579 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:59.579 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:59.579 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:59.579 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:59.579 Initialization complete. Launching workers. 
00:11:59.579 ======================================================== 00:11:59.579 Latency(us) 00:11:59.579 Device Information : IOPS MiB/s Average min max 00:11:59.579 PCIE (0000:00:10.0) NSID 1 from core 0: 12434.70 145.72 10311.84 8576.69 40207.16 00:11:59.579 PCIE (0000:00:11.0) NSID 1 from core 0: 12434.70 145.72 10292.45 8639.48 38203.74 00:11:59.579 PCIE (0000:00:13.0) NSID 1 from core 0: 12434.70 145.72 10271.76 8600.29 36182.63 00:11:59.579 PCIE (0000:00:12.0) NSID 1 from core 0: 12434.70 145.72 10250.88 8656.08 33835.79 00:11:59.579 PCIE (0000:00:12.0) NSID 2 from core 0: 12434.70 145.72 10229.84 8678.55 31655.35 00:11:59.579 PCIE (0000:00:12.0) NSID 3 from core 0: 12434.70 145.72 10208.13 8678.35 29353.22 00:11:59.579 ======================================================== 00:11:59.579 Total : 74608.20 874.31 10260.82 8576.69 40207.16 00:11:59.579 00:11:59.579 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:59.579 ================================================================================= 00:11:59.579 1.00000% : 8817.571us 00:11:59.579 10.00000% : 9175.040us 00:11:59.579 25.00000% : 9472.931us 00:11:59.579 50.00000% : 9830.400us 00:11:59.579 75.00000% : 10247.447us 00:11:59.579 90.00000% : 11260.276us 00:11:59.579 95.00000% : 13226.356us 00:11:59.579 98.00000% : 14000.873us 00:11:59.579 99.00000% : 30027.404us 00:11:59.579 99.50000% : 37891.724us 00:11:59.579 99.90000% : 39798.225us 00:11:59.579 99.99000% : 40274.851us 00:11:59.579 99.99900% : 40274.851us 00:11:59.579 99.99990% : 40274.851us 00:11:59.579 99.99999% : 40274.851us 00:11:59.579 00:11:59.579 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:59.579 ================================================================================= 00:11:59.579 1.00000% : 8877.149us 00:11:59.579 10.00000% : 9234.618us 00:11:59.579 25.00000% : 9532.509us 00:11:59.580 50.00000% : 9830.400us 00:11:59.580 75.00000% : 10247.447us 00:11:59.580 90.00000% : 11200.698us 00:11:59.580 95.00000% : 13166.778us 00:11:59.580 98.00000% : 13881.716us 00:11:59.580 99.00000% : 28359.215us 00:11:59.580 99.50000% : 35985.222us 00:11:59.580 99.90000% : 37891.724us 00:11:59.580 99.99000% : 38368.349us 00:11:59.580 99.99900% : 38368.349us 00:11:59.580 99.99990% : 38368.349us 00:11:59.580 99.99999% : 38368.349us 00:11:59.580 00:11:59.580 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:59.580 ================================================================================= 00:11:59.580 1.00000% : 8877.149us 00:11:59.580 10.00000% : 9234.618us 00:11:59.580 25.00000% : 9532.509us 00:11:59.580 50.00000% : 9830.400us 00:11:59.580 75.00000% : 10247.447us 00:11:59.580 90.00000% : 11260.276us 00:11:59.580 95.00000% : 13166.778us 00:11:59.580 98.00000% : 13762.560us 00:11:59.580 99.00000% : 26691.025us 00:11:59.580 99.50000% : 34317.033us 00:11:59.580 99.90000% : 35985.222us 00:11:59.580 99.99000% : 36223.535us 00:11:59.580 99.99900% : 36223.535us 00:11:59.580 99.99990% : 36223.535us 00:11:59.580 99.99999% : 36223.535us 00:11:59.580 00:11:59.580 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:59.580 ================================================================================= 00:11:59.580 1.00000% : 8877.149us 00:11:59.580 10.00000% : 9234.618us 00:11:59.580 25.00000% : 9532.509us 00:11:59.580 50.00000% : 9830.400us 00:11:59.580 75.00000% : 10247.447us 00:11:59.580 90.00000% : 11319.855us 00:11:59.580 95.00000% : 13166.778us 00:11:59.580 98.00000% : 13762.560us 
00:11:59.580 99.00000% : 24665.367us 00:11:59.580 99.50000% : 31933.905us 00:11:59.580 99.90000% : 33602.095us 00:11:59.580 99.99000% : 33840.407us 00:11:59.580 99.99900% : 33840.407us 00:11:59.580 99.99990% : 33840.407us 00:11:59.580 99.99999% : 33840.407us 00:11:59.580 00:11:59.580 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:59.580 ================================================================================= 00:11:59.580 1.00000% : 8877.149us 00:11:59.580 10.00000% : 9234.618us 00:11:59.580 25.00000% : 9532.509us 00:11:59.580 50.00000% : 9830.400us 00:11:59.580 75.00000% : 10247.447us 00:11:59.580 90.00000% : 11200.698us 00:11:59.580 95.00000% : 13166.778us 00:11:59.580 98.00000% : 13762.560us 00:11:59.580 99.00000% : 22639.709us 00:11:59.580 99.50000% : 29789.091us 00:11:59.580 99.90000% : 31457.280us 00:11:59.580 99.99000% : 31695.593us 00:11:59.580 99.99900% : 31695.593us 00:11:59.580 99.99990% : 31695.593us 00:11:59.580 99.99999% : 31695.593us 00:11:59.580 00:11:59.580 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:59.580 ================================================================================= 00:11:59.580 1.00000% : 8877.149us 00:11:59.580 10.00000% : 9234.618us 00:11:59.580 25.00000% : 9532.509us 00:11:59.580 50.00000% : 9830.400us 00:11:59.580 75.00000% : 10247.447us 00:11:59.580 90.00000% : 11141.120us 00:11:59.580 95.00000% : 13226.356us 00:11:59.580 98.00000% : 13822.138us 00:11:59.580 99.00000% : 20614.051us 00:11:59.580 99.50000% : 27405.964us 00:11:59.580 99.90000% : 29074.153us 00:11:59.580 99.99000% : 29431.622us 00:11:59.580 99.99900% : 29431.622us 00:11:59.580 99.99990% : 29431.622us 00:11:59.580 99.99999% : 29431.622us 00:11:59.580 00:11:59.580 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:59.580 ============================================================================== 00:11:59.580 Range in us Cumulative IO count 00:11:59.580 8519.680 - 8579.258: 0.0080% ( 1) 00:11:59.580 8579.258 - 8638.836: 0.0962% ( 11) 00:11:59.580 8638.836 - 8698.415: 0.4167% ( 40) 00:11:59.580 8698.415 - 8757.993: 0.8253% ( 51) 00:11:59.580 8757.993 - 8817.571: 1.4183% ( 74) 00:11:59.580 8817.571 - 8877.149: 2.2276% ( 101) 00:11:59.580 8877.149 - 8936.727: 3.2612% ( 129) 00:11:59.580 8936.727 - 8996.305: 4.7756% ( 189) 00:11:59.580 8996.305 - 9055.884: 6.5224% ( 218) 00:11:59.580 9055.884 - 9115.462: 8.5978% ( 259) 00:11:59.580 9115.462 - 9175.040: 11.0256% ( 303) 00:11:59.580 9175.040 - 9234.618: 13.5978% ( 321) 00:11:59.580 9234.618 - 9294.196: 16.4824% ( 360) 00:11:59.580 9294.196 - 9353.775: 19.5913% ( 388) 00:11:59.580 9353.775 - 9413.353: 23.0769% ( 435) 00:11:59.580 9413.353 - 9472.931: 26.8029% ( 465) 00:11:59.580 9472.931 - 9532.509: 30.6490% ( 480) 00:11:59.580 9532.509 - 9592.087: 34.6314% ( 497) 00:11:59.580 9592.087 - 9651.665: 38.6298% ( 499) 00:11:59.580 9651.665 - 9711.244: 42.6683% ( 504) 00:11:59.580 9711.244 - 9770.822: 46.8349% ( 520) 00:11:59.580 9770.822 - 9830.400: 50.7212% ( 485) 00:11:59.580 9830.400 - 9889.978: 54.6955% ( 496) 00:11:59.580 9889.978 - 9949.556: 58.6298% ( 491) 00:11:59.580 9949.556 - 10009.135: 62.2756% ( 455) 00:11:59.580 10009.135 - 10068.713: 65.8413% ( 445) 00:11:59.580 10068.713 - 10128.291: 69.1106% ( 408) 00:11:59.580 10128.291 - 10187.869: 72.0593% ( 368) 00:11:59.580 10187.869 - 10247.447: 75.0000% ( 367) 00:11:59.580 10247.447 - 10307.025: 77.4519% ( 306) 00:11:59.580 10307.025 - 10366.604: 79.6635% ( 276) 00:11:59.580 10366.604 - 10426.182: 81.5705% ( 238) 
00:11:59.580 10426.182 - 10485.760: 83.3173% ( 218) 00:11:59.580 10485.760 - 10545.338: 84.8077% ( 186) 00:11:59.580 10545.338 - 10604.916: 85.9615% ( 144) 00:11:59.580 10604.916 - 10664.495: 86.7708% ( 101) 00:11:59.580 10664.495 - 10724.073: 87.4519% ( 85) 00:11:59.580 10724.073 - 10783.651: 88.1571% ( 88) 00:11:59.580 10783.651 - 10843.229: 88.6619% ( 63) 00:11:59.580 10843.229 - 10902.807: 89.0385% ( 47) 00:11:59.580 10902.807 - 10962.385: 89.2869% ( 31) 00:11:59.580 10962.385 - 11021.964: 89.5433% ( 32) 00:11:59.580 11021.964 - 11081.542: 89.7516% ( 26) 00:11:59.580 11081.542 - 11141.120: 89.8638% ( 14) 00:11:59.580 11141.120 - 11200.698: 89.9760% ( 14) 00:11:59.580 11200.698 - 11260.276: 90.0881% ( 14) 00:11:59.580 11260.276 - 11319.855: 90.1522% ( 8) 00:11:59.580 11319.855 - 11379.433: 90.2244% ( 9) 00:11:59.580 11379.433 - 11439.011: 90.2724% ( 6) 00:11:59.580 11439.011 - 11498.589: 90.3365% ( 8) 00:11:59.580 11498.589 - 11558.167: 90.4006% ( 8) 00:11:59.580 11558.167 - 11617.745: 90.4567% ( 7) 00:11:59.580 11617.745 - 11677.324: 90.5369% ( 10) 00:11:59.580 11677.324 - 11736.902: 90.5929% ( 7) 00:11:59.580 11736.902 - 11796.480: 90.6891% ( 12) 00:11:59.580 11796.480 - 11856.058: 90.7772% ( 11) 00:11:59.580 11856.058 - 11915.636: 90.8654% ( 11) 00:11:59.580 11915.636 - 11975.215: 90.9375% ( 9) 00:11:59.580 11975.215 - 12034.793: 91.0337% ( 12) 00:11:59.580 12034.793 - 12094.371: 91.1458% ( 14) 00:11:59.580 12094.371 - 12153.949: 91.2500% ( 13) 00:11:59.580 12153.949 - 12213.527: 91.3702% ( 15) 00:11:59.580 12213.527 - 12273.105: 91.4904% ( 15) 00:11:59.580 12273.105 - 12332.684: 91.6186% ( 16) 00:11:59.580 12332.684 - 12392.262: 91.7628% ( 18) 00:11:59.580 12392.262 - 12451.840: 91.9391% ( 22) 00:11:59.580 12451.840 - 12511.418: 92.1154% ( 22) 00:11:59.580 12511.418 - 12570.996: 92.3478% ( 29) 00:11:59.580 12570.996 - 12630.575: 92.5962% ( 31) 00:11:59.580 12630.575 - 12690.153: 92.7724% ( 22) 00:11:59.580 12690.153 - 12749.731: 93.0288% ( 32) 00:11:59.580 12749.731 - 12809.309: 93.3333% ( 38) 00:11:59.580 12809.309 - 12868.887: 93.6378% ( 38) 00:11:59.580 12868.887 - 12928.465: 93.8542% ( 27) 00:11:59.580 12928.465 - 12988.044: 94.1426% ( 36) 00:11:59.580 12988.044 - 13047.622: 94.4231% ( 35) 00:11:59.580 13047.622 - 13107.200: 94.7035% ( 35) 00:11:59.580 13107.200 - 13166.778: 94.9840% ( 35) 00:11:59.580 13166.778 - 13226.356: 95.3205% ( 42) 00:11:59.580 13226.356 - 13285.935: 95.6170% ( 37) 00:11:59.580 13285.935 - 13345.513: 95.8494% ( 29) 00:11:59.580 13345.513 - 13405.091: 96.1458% ( 37) 00:11:59.580 13405.091 - 13464.669: 96.4022% ( 32) 00:11:59.580 13464.669 - 13524.247: 96.6026% ( 25) 00:11:59.580 13524.247 - 13583.825: 96.9071% ( 38) 00:11:59.580 13583.825 - 13643.404: 97.0994% ( 24) 00:11:59.580 13643.404 - 13702.982: 97.2837% ( 23) 00:11:59.580 13702.982 - 13762.560: 97.4679% ( 23) 00:11:59.580 13762.560 - 13822.138: 97.6202% ( 19) 00:11:59.580 13822.138 - 13881.716: 97.7804% ( 20) 00:11:59.580 13881.716 - 13941.295: 97.9087% ( 16) 00:11:59.580 13941.295 - 14000.873: 98.0769% ( 21) 00:11:59.580 14000.873 - 14060.451: 98.2292% ( 19) 00:11:59.580 14060.451 - 14120.029: 98.3574% ( 16) 00:11:59.580 14120.029 - 14179.607: 98.4615% ( 13) 00:11:59.580 14179.607 - 14239.185: 98.5497% ( 11) 00:11:59.580 14239.185 - 14298.764: 98.6538% ( 13) 00:11:59.580 14298.764 - 14358.342: 98.7340% ( 10) 00:11:59.580 14358.342 - 14417.920: 98.8061% ( 9) 00:11:59.580 14417.920 - 14477.498: 98.8381% ( 4) 00:11:59.580 14477.498 - 14537.076: 98.8702% ( 4) 00:11:59.580 14537.076 - 14596.655: 
98.8942% ( 3) 00:11:59.580 14596.655 - 14656.233: 98.9183% ( 3) 00:11:59.580 14656.233 - 14715.811: 98.9343% ( 2) 00:11:59.580 14715.811 - 14775.389: 98.9744% ( 5) 00:11:59.580 29669.935 - 29789.091: 98.9824% ( 1) 00:11:59.580 29908.247 - 30027.404: 99.0064% ( 3) 00:11:59.580 30027.404 - 30146.560: 99.0304% ( 3) 00:11:59.580 30146.560 - 30265.716: 99.0385% ( 1) 00:11:59.580 30265.716 - 30384.873: 99.0705% ( 4) 00:11:59.580 30384.873 - 30504.029: 99.0946% ( 3) 00:11:59.580 30504.029 - 30742.342: 99.1426% ( 6) 00:11:59.580 30742.342 - 30980.655: 99.1987% ( 7) 00:11:59.580 30980.655 - 31218.967: 99.2388% ( 5) 00:11:59.581 31218.967 - 31457.280: 99.2949% ( 7) 00:11:59.581 31457.280 - 31695.593: 99.3429% ( 6) 00:11:59.581 31695.593 - 31933.905: 99.3910% ( 6) 00:11:59.581 31933.905 - 32172.218: 99.4311% ( 5) 00:11:59.581 32172.218 - 32410.531: 99.4792% ( 6) 00:11:59.581 32410.531 - 32648.844: 99.4872% ( 1) 00:11:59.581 37653.411 - 37891.724: 99.5192% ( 4) 00:11:59.581 37891.724 - 38130.036: 99.5673% ( 6) 00:11:59.581 38130.036 - 38368.349: 99.6234% ( 7) 00:11:59.581 38368.349 - 38606.662: 99.6715% ( 6) 00:11:59.581 38606.662 - 38844.975: 99.7196% ( 6) 00:11:59.581 38844.975 - 39083.287: 99.7596% ( 5) 00:11:59.581 39083.287 - 39321.600: 99.8077% ( 6) 00:11:59.581 39321.600 - 39559.913: 99.8718% ( 8) 00:11:59.581 39559.913 - 39798.225: 99.9199% ( 6) 00:11:59.581 39798.225 - 40036.538: 99.9599% ( 5) 00:11:59.581 40036.538 - 40274.851: 100.0000% ( 5) 00:11:59.581 00:11:59.581 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:59.581 ============================================================================== 00:11:59.581 Range in us Cumulative IO count 00:11:59.581 8638.836 - 8698.415: 0.0401% ( 5) 00:11:59.581 8698.415 - 8757.993: 0.2324% ( 24) 00:11:59.581 8757.993 - 8817.571: 0.6010% ( 46) 00:11:59.581 8817.571 - 8877.149: 1.1298% ( 66) 00:11:59.581 8877.149 - 8936.727: 1.8830% ( 94) 00:11:59.581 8936.727 - 8996.305: 2.9167% ( 129) 00:11:59.581 8996.305 - 9055.884: 4.4712% ( 194) 00:11:59.581 9055.884 - 9115.462: 6.3141% ( 230) 00:11:59.581 9115.462 - 9175.040: 8.6298% ( 289) 00:11:59.581 9175.040 - 9234.618: 11.2981% ( 333) 00:11:59.581 9234.618 - 9294.196: 14.2708% ( 371) 00:11:59.581 9294.196 - 9353.775: 17.3798% ( 388) 00:11:59.581 9353.775 - 9413.353: 20.8013% ( 427) 00:11:59.581 9413.353 - 9472.931: 24.7196% ( 489) 00:11:59.581 9472.931 - 9532.509: 28.8381% ( 514) 00:11:59.581 9532.509 - 9592.087: 33.1571% ( 539) 00:11:59.581 9592.087 - 9651.665: 37.7724% ( 576) 00:11:59.581 9651.665 - 9711.244: 42.2837% ( 563) 00:11:59.581 9711.244 - 9770.822: 46.9391% ( 581) 00:11:59.581 9770.822 - 9830.400: 51.6426% ( 587) 00:11:59.581 9830.400 - 9889.978: 55.9936% ( 543) 00:11:59.581 9889.978 - 9949.556: 60.1202% ( 515) 00:11:59.581 9949.556 - 10009.135: 64.2869% ( 520) 00:11:59.581 10009.135 - 10068.713: 68.0529% ( 470) 00:11:59.581 10068.713 - 10128.291: 71.3942% ( 417) 00:11:59.581 10128.291 - 10187.869: 74.3349% ( 367) 00:11:59.581 10187.869 - 10247.447: 76.9231% ( 323) 00:11:59.581 10247.447 - 10307.025: 79.3029% ( 297) 00:11:59.581 10307.025 - 10366.604: 81.4744% ( 271) 00:11:59.581 10366.604 - 10426.182: 83.3574% ( 235) 00:11:59.581 10426.182 - 10485.760: 84.8718% ( 189) 00:11:59.581 10485.760 - 10545.338: 86.0256% ( 144) 00:11:59.581 10545.338 - 10604.916: 86.8830% ( 107) 00:11:59.581 10604.916 - 10664.495: 87.5240% ( 80) 00:11:59.581 10664.495 - 10724.073: 88.0929% ( 71) 00:11:59.581 10724.073 - 10783.651: 88.5417% ( 56) 00:11:59.581 10783.651 - 10843.229: 88.9183% ( 47) 
00:11:59.581 10843.229 - 10902.807: 89.2308% ( 39) 00:11:59.581 10902.807 - 10962.385: 89.4712% ( 30) 00:11:59.581 10962.385 - 11021.964: 89.6795% ( 26) 00:11:59.581 11021.964 - 11081.542: 89.8397% ( 20) 00:11:59.581 11081.542 - 11141.120: 89.9279% ( 11) 00:11:59.581 11141.120 - 11200.698: 90.0080% ( 10) 00:11:59.581 11200.698 - 11260.276: 90.0561% ( 6) 00:11:59.581 11260.276 - 11319.855: 90.1042% ( 6) 00:11:59.581 11319.855 - 11379.433: 90.1442% ( 5) 00:11:59.581 11379.433 - 11439.011: 90.1763% ( 4) 00:11:59.581 11439.011 - 11498.589: 90.2484% ( 9) 00:11:59.581 11498.589 - 11558.167: 90.3045% ( 7) 00:11:59.581 11558.167 - 11617.745: 90.3526% ( 6) 00:11:59.581 11617.745 - 11677.324: 90.4247% ( 9) 00:11:59.581 11677.324 - 11736.902: 90.5128% ( 11) 00:11:59.581 11736.902 - 11796.480: 90.6170% ( 13) 00:11:59.581 11796.480 - 11856.058: 90.7131% ( 12) 00:11:59.581 11856.058 - 11915.636: 90.7772% ( 8) 00:11:59.581 11915.636 - 11975.215: 90.8494% ( 9) 00:11:59.581 11975.215 - 12034.793: 90.9295% ( 10) 00:11:59.581 12034.793 - 12094.371: 91.0657% ( 17) 00:11:59.581 12094.371 - 12153.949: 91.1779% ( 14) 00:11:59.581 12153.949 - 12213.527: 91.3301% ( 19) 00:11:59.581 12213.527 - 12273.105: 91.4744% ( 18) 00:11:59.581 12273.105 - 12332.684: 91.6587% ( 23) 00:11:59.581 12332.684 - 12392.262: 91.8269% ( 21) 00:11:59.581 12392.262 - 12451.840: 91.9792% ( 19) 00:11:59.581 12451.840 - 12511.418: 92.1635% ( 23) 00:11:59.581 12511.418 - 12570.996: 92.3478% ( 23) 00:11:59.581 12570.996 - 12630.575: 92.5401% ( 24) 00:11:59.581 12630.575 - 12690.153: 92.7724% ( 29) 00:11:59.581 12690.153 - 12749.731: 93.0208% ( 31) 00:11:59.581 12749.731 - 12809.309: 93.2692% ( 31) 00:11:59.581 12809.309 - 12868.887: 93.5497% ( 35) 00:11:59.581 12868.887 - 12928.465: 93.8542% ( 38) 00:11:59.581 12928.465 - 12988.044: 94.1667% ( 39) 00:11:59.581 12988.044 - 13047.622: 94.4872% ( 40) 00:11:59.581 13047.622 - 13107.200: 94.7997% ( 39) 00:11:59.581 13107.200 - 13166.778: 95.0721% ( 34) 00:11:59.581 13166.778 - 13226.356: 95.3526% ( 35) 00:11:59.581 13226.356 - 13285.935: 95.6170% ( 33) 00:11:59.581 13285.935 - 13345.513: 95.9054% ( 36) 00:11:59.581 13345.513 - 13405.091: 96.1939% ( 36) 00:11:59.581 13405.091 - 13464.669: 96.4583% ( 33) 00:11:59.581 13464.669 - 13524.247: 96.7788% ( 40) 00:11:59.581 13524.247 - 13583.825: 97.0272% ( 31) 00:11:59.581 13583.825 - 13643.404: 97.2436% ( 27) 00:11:59.581 13643.404 - 13702.982: 97.4359% ( 24) 00:11:59.581 13702.982 - 13762.560: 97.6442% ( 26) 00:11:59.581 13762.560 - 13822.138: 97.8526% ( 26) 00:11:59.581 13822.138 - 13881.716: 98.0208% ( 21) 00:11:59.581 13881.716 - 13941.295: 98.1571% ( 17) 00:11:59.581 13941.295 - 14000.873: 98.2853% ( 16) 00:11:59.581 14000.873 - 14060.451: 98.4135% ( 16) 00:11:59.581 14060.451 - 14120.029: 98.5176% ( 13) 00:11:59.581 14120.029 - 14179.607: 98.6058% ( 11) 00:11:59.581 14179.607 - 14239.185: 98.7179% ( 14) 00:11:59.581 14239.185 - 14298.764: 98.7981% ( 10) 00:11:59.581 14298.764 - 14358.342: 98.8782% ( 10) 00:11:59.581 14358.342 - 14417.920: 98.9183% ( 5) 00:11:59.581 14417.920 - 14477.498: 98.9503% ( 4) 00:11:59.581 14477.498 - 14537.076: 98.9744% ( 3) 00:11:59.581 28120.902 - 28240.058: 98.9984% ( 3) 00:11:59.581 28240.058 - 28359.215: 99.0304% ( 4) 00:11:59.581 28359.215 - 28478.371: 99.0465% ( 2) 00:11:59.581 28478.371 - 28597.527: 99.0705% ( 3) 00:11:59.581 28597.527 - 28716.684: 99.1026% ( 4) 00:11:59.581 28716.684 - 28835.840: 99.1266% ( 3) 00:11:59.581 28835.840 - 28954.996: 99.1587% ( 4) 00:11:59.581 28954.996 - 29074.153: 99.1827% ( 3) 
00:11:59.581 29074.153 - 29193.309: 99.2147% ( 4) 00:11:59.581 29193.309 - 29312.465: 99.2388% ( 3) 00:11:59.581 29312.465 - 29431.622: 99.2708% ( 4) 00:11:59.581 29431.622 - 29550.778: 99.2949% ( 3) 00:11:59.581 29550.778 - 29669.935: 99.3189% ( 3) 00:11:59.581 29669.935 - 29789.091: 99.3429% ( 3) 00:11:59.581 29789.091 - 29908.247: 99.3750% ( 4) 00:11:59.581 30027.404 - 30146.560: 99.4071% ( 4) 00:11:59.581 30146.560 - 30265.716: 99.4311% ( 3) 00:11:59.581 30265.716 - 30384.873: 99.4551% ( 3) 00:11:59.581 30384.873 - 30504.029: 99.4872% ( 4) 00:11:59.581 35746.909 - 35985.222: 99.5112% ( 3) 00:11:59.581 35985.222 - 36223.535: 99.5593% ( 6) 00:11:59.581 36223.535 - 36461.847: 99.6154% ( 7) 00:11:59.581 36461.847 - 36700.160: 99.6715% ( 7) 00:11:59.581 36700.160 - 36938.473: 99.7196% ( 6) 00:11:59.581 36938.473 - 37176.785: 99.7676% ( 6) 00:11:59.581 37176.785 - 37415.098: 99.8237% ( 7) 00:11:59.581 37415.098 - 37653.411: 99.8798% ( 7) 00:11:59.581 37653.411 - 37891.724: 99.9279% ( 6) 00:11:59.581 37891.724 - 38130.036: 99.9760% ( 6) 00:11:59.581 38130.036 - 38368.349: 100.0000% ( 3) 00:11:59.581 00:11:59.581 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:59.581 ============================================================================== 00:11:59.581 Range in us Cumulative IO count 00:11:59.581 8579.258 - 8638.836: 0.0481% ( 6) 00:11:59.581 8638.836 - 8698.415: 0.2244% ( 22) 00:11:59.581 8698.415 - 8757.993: 0.5208% ( 37) 00:11:59.581 8757.993 - 8817.571: 0.9776% ( 57) 00:11:59.581 8817.571 - 8877.149: 1.6587% ( 85) 00:11:59.581 8877.149 - 8936.727: 2.4279% ( 96) 00:11:59.581 8936.727 - 8996.305: 3.6138% ( 148) 00:11:59.581 8996.305 - 9055.884: 5.0401% ( 178) 00:11:59.581 9055.884 - 9115.462: 6.8830% ( 230) 00:11:59.581 9115.462 - 9175.040: 8.9022% ( 252) 00:11:59.581 9175.040 - 9234.618: 11.4263% ( 315) 00:11:59.581 9234.618 - 9294.196: 14.1827% ( 344) 00:11:59.581 9294.196 - 9353.775: 17.4439% ( 407) 00:11:59.581 9353.775 - 9413.353: 20.9535% ( 438) 00:11:59.581 9413.353 - 9472.931: 24.8798% ( 490) 00:11:59.581 9472.931 - 9532.509: 29.2147% ( 541) 00:11:59.581 9532.509 - 9592.087: 33.5657% ( 543) 00:11:59.581 9592.087 - 9651.665: 38.1170% ( 568) 00:11:59.581 9651.665 - 9711.244: 42.7163% ( 574) 00:11:59.581 9711.244 - 9770.822: 47.3558% ( 579) 00:11:59.581 9770.822 - 9830.400: 51.6506% ( 536) 00:11:59.581 9830.400 - 9889.978: 55.8574% ( 525) 00:11:59.581 9889.978 - 9949.556: 59.9519% ( 511) 00:11:59.581 9949.556 - 10009.135: 63.8061% ( 481) 00:11:59.581 10009.135 - 10068.713: 67.3558% ( 443) 00:11:59.581 10068.713 - 10128.291: 70.5769% ( 402) 00:11:59.581 10128.291 - 10187.869: 73.4936% ( 364) 00:11:59.581 10187.869 - 10247.447: 76.2821% ( 348) 00:11:59.581 10247.447 - 10307.025: 78.8301% ( 318) 00:11:59.581 10307.025 - 10366.604: 81.0337% ( 275) 00:11:59.581 10366.604 - 10426.182: 82.7324% ( 212) 00:11:59.581 10426.182 - 10485.760: 84.1026% ( 171) 00:11:59.581 10485.760 - 10545.338: 85.3125% ( 151) 00:11:59.582 10545.338 - 10604.916: 86.3542% ( 130) 00:11:59.582 10604.916 - 10664.495: 87.1795% ( 103) 00:11:59.582 10664.495 - 10724.073: 87.8045% ( 78) 00:11:59.582 10724.073 - 10783.651: 88.3253% ( 65) 00:11:59.582 10783.651 - 10843.229: 88.7019% ( 47) 00:11:59.582 10843.229 - 10902.807: 89.0465% ( 43) 00:11:59.582 10902.807 - 10962.385: 89.3269% ( 35) 00:11:59.582 10962.385 - 11021.964: 89.5192% ( 24) 00:11:59.582 11021.964 - 11081.542: 89.7436% ( 28) 00:11:59.582 11081.542 - 11141.120: 89.8718% ( 16) 00:11:59.582 11141.120 - 11200.698: 89.9760% ( 13) 00:11:59.582 
11200.698 - 11260.276: 90.0881% ( 14) 00:11:59.582 11260.276 - 11319.855: 90.1923% ( 13) 00:11:59.582 11319.855 - 11379.433: 90.2564% ( 8) 00:11:59.582 11379.433 - 11439.011: 90.3365% ( 10) 00:11:59.582 11439.011 - 11498.589: 90.4167% ( 10) 00:11:59.582 11498.589 - 11558.167: 90.4888% ( 9) 00:11:59.582 11558.167 - 11617.745: 90.5449% ( 7) 00:11:59.582 11617.745 - 11677.324: 90.6010% ( 7) 00:11:59.582 11677.324 - 11736.902: 90.6571% ( 7) 00:11:59.582 11736.902 - 11796.480: 90.7051% ( 6) 00:11:59.582 11796.480 - 11856.058: 90.7532% ( 6) 00:11:59.582 11856.058 - 11915.636: 90.8494% ( 12) 00:11:59.582 11915.636 - 11975.215: 90.9295% ( 10) 00:11:59.582 11975.215 - 12034.793: 91.0016% ( 9) 00:11:59.582 12034.793 - 12094.371: 91.1138% ( 14) 00:11:59.582 12094.371 - 12153.949: 91.2179% ( 13) 00:11:59.582 12153.949 - 12213.527: 91.3381% ( 15) 00:11:59.582 12213.527 - 12273.105: 91.4904% ( 19) 00:11:59.582 12273.105 - 12332.684: 91.6506% ( 20) 00:11:59.582 12332.684 - 12392.262: 91.8029% ( 19) 00:11:59.582 12392.262 - 12451.840: 91.9712% ( 21) 00:11:59.582 12451.840 - 12511.418: 92.1554% ( 23) 00:11:59.582 12511.418 - 12570.996: 92.3638% ( 26) 00:11:59.582 12570.996 - 12630.575: 92.5401% ( 22) 00:11:59.582 12630.575 - 12690.153: 92.7163% ( 22) 00:11:59.582 12690.153 - 12749.731: 92.9647% ( 31) 00:11:59.582 12749.731 - 12809.309: 93.2292% ( 33) 00:11:59.582 12809.309 - 12868.887: 93.5577% ( 41) 00:11:59.582 12868.887 - 12928.465: 93.7901% ( 29) 00:11:59.582 12928.465 - 12988.044: 94.0385% ( 31) 00:11:59.582 12988.044 - 13047.622: 94.3830% ( 43) 00:11:59.582 13047.622 - 13107.200: 94.7436% ( 45) 00:11:59.582 13107.200 - 13166.778: 95.0881% ( 43) 00:11:59.582 13166.778 - 13226.356: 95.4327% ( 43) 00:11:59.582 13226.356 - 13285.935: 95.8013% ( 46) 00:11:59.582 13285.935 - 13345.513: 96.1378% ( 42) 00:11:59.582 13345.513 - 13405.091: 96.4744% ( 42) 00:11:59.582 13405.091 - 13464.669: 96.8029% ( 41) 00:11:59.582 13464.669 - 13524.247: 97.1314% ( 41) 00:11:59.582 13524.247 - 13583.825: 97.4279% ( 37) 00:11:59.582 13583.825 - 13643.404: 97.6843% ( 32) 00:11:59.582 13643.404 - 13702.982: 97.9247% ( 30) 00:11:59.582 13702.982 - 13762.560: 98.0929% ( 21) 00:11:59.582 13762.560 - 13822.138: 98.2692% ( 22) 00:11:59.582 13822.138 - 13881.716: 98.4135% ( 18) 00:11:59.582 13881.716 - 13941.295: 98.5817% ( 21) 00:11:59.582 13941.295 - 14000.873: 98.7179% ( 17) 00:11:59.582 14000.873 - 14060.451: 98.8221% ( 13) 00:11:59.582 14060.451 - 14120.029: 98.8862% ( 8) 00:11:59.582 14120.029 - 14179.607: 98.9423% ( 7) 00:11:59.582 14179.607 - 14239.185: 98.9744% ( 4) 00:11:59.582 26452.713 - 26571.869: 98.9824% ( 1) 00:11:59.582 26571.869 - 26691.025: 99.0064% ( 3) 00:11:59.582 26691.025 - 26810.182: 99.0304% ( 3) 00:11:59.582 26810.182 - 26929.338: 99.0545% ( 3) 00:11:59.582 26929.338 - 27048.495: 99.0785% ( 3) 00:11:59.582 27048.495 - 27167.651: 99.1026% ( 3) 00:11:59.582 27167.651 - 27286.807: 99.1346% ( 4) 00:11:59.582 27286.807 - 27405.964: 99.1587% ( 3) 00:11:59.582 27405.964 - 27525.120: 99.1827% ( 3) 00:11:59.582 27525.120 - 27644.276: 99.2067% ( 3) 00:11:59.582 27644.276 - 27763.433: 99.2388% ( 4) 00:11:59.582 27763.433 - 27882.589: 99.2628% ( 3) 00:11:59.582 27882.589 - 28001.745: 99.2949% ( 4) 00:11:59.582 28001.745 - 28120.902: 99.3189% ( 3) 00:11:59.582 28120.902 - 28240.058: 99.3510% ( 4) 00:11:59.582 28240.058 - 28359.215: 99.3750% ( 3) 00:11:59.582 28359.215 - 28478.371: 99.3990% ( 3) 00:11:59.582 28478.371 - 28597.527: 99.4311% ( 4) 00:11:59.582 28597.527 - 28716.684: 99.4551% ( 3) 00:11:59.582 28716.684 - 
28835.840: 99.4792% ( 3) 00:11:59.582 28835.840 - 28954.996: 99.4872% ( 1) 00:11:59.582 34078.720 - 34317.033: 99.5353% ( 6) 00:11:59.582 34317.033 - 34555.345: 99.5913% ( 7) 00:11:59.582 34555.345 - 34793.658: 99.6394% ( 6) 00:11:59.582 34793.658 - 35031.971: 99.7035% ( 8) 00:11:59.582 35031.971 - 35270.284: 99.7596% ( 7) 00:11:59.582 35270.284 - 35508.596: 99.8157% ( 7) 00:11:59.582 35508.596 - 35746.909: 99.8798% ( 8) 00:11:59.582 35746.909 - 35985.222: 99.9439% ( 8) 00:11:59.582 35985.222 - 36223.535: 100.0000% ( 7) 00:11:59.582 00:11:59.582 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:59.582 ============================================================================== 00:11:59.582 Range in us Cumulative IO count 00:11:59.582 8638.836 - 8698.415: 0.0401% ( 5) 00:11:59.582 8698.415 - 8757.993: 0.2163% ( 22) 00:11:59.582 8757.993 - 8817.571: 0.6490% ( 54) 00:11:59.582 8817.571 - 8877.149: 1.3782% ( 91) 00:11:59.582 8877.149 - 8936.727: 2.2997% ( 115) 00:11:59.582 8936.727 - 8996.305: 3.3574% ( 132) 00:11:59.582 8996.305 - 9055.884: 4.6875% ( 166) 00:11:59.582 9055.884 - 9115.462: 6.3622% ( 209) 00:11:59.582 9115.462 - 9175.040: 8.5657% ( 275) 00:11:59.582 9175.040 - 9234.618: 11.0897% ( 315) 00:11:59.582 9234.618 - 9294.196: 13.9824% ( 361) 00:11:59.582 9294.196 - 9353.775: 17.1474% ( 395) 00:11:59.582 9353.775 - 9413.353: 20.8654% ( 464) 00:11:59.582 9413.353 - 9472.931: 24.7837% ( 489) 00:11:59.582 9472.931 - 9532.509: 29.0144% ( 528) 00:11:59.582 9532.509 - 9592.087: 33.2212% ( 525) 00:11:59.582 9592.087 - 9651.665: 37.8125% ( 573) 00:11:59.582 9651.665 - 9711.244: 42.4359% ( 577) 00:11:59.582 9711.244 - 9770.822: 47.0593% ( 577) 00:11:59.582 9770.822 - 9830.400: 51.4824% ( 552) 00:11:59.582 9830.400 - 9889.978: 55.8333% ( 543) 00:11:59.582 9889.978 - 9949.556: 60.0401% ( 525) 00:11:59.582 9949.556 - 10009.135: 63.9103% ( 483) 00:11:59.582 10009.135 - 10068.713: 67.5962% ( 460) 00:11:59.582 10068.713 - 10128.291: 70.8734% ( 409) 00:11:59.582 10128.291 - 10187.869: 73.9343% ( 382) 00:11:59.582 10187.869 - 10247.447: 76.7468% ( 351) 00:11:59.582 10247.447 - 10307.025: 79.2468% ( 312) 00:11:59.582 10307.025 - 10366.604: 81.2740% ( 253) 00:11:59.582 10366.604 - 10426.182: 83.0128% ( 217) 00:11:59.582 10426.182 - 10485.760: 84.3830% ( 171) 00:11:59.582 10485.760 - 10545.338: 85.4888% ( 138) 00:11:59.582 10545.338 - 10604.916: 86.3782% ( 111) 00:11:59.582 10604.916 - 10664.495: 87.1715% ( 99) 00:11:59.582 10664.495 - 10724.073: 87.8365% ( 83) 00:11:59.582 10724.073 - 10783.651: 88.3013% ( 58) 00:11:59.582 10783.651 - 10843.229: 88.6458% ( 43) 00:11:59.582 10843.229 - 10902.807: 88.9503% ( 38) 00:11:59.582 10902.807 - 10962.385: 89.1747% ( 28) 00:11:59.582 10962.385 - 11021.964: 89.3750% ( 25) 00:11:59.582 11021.964 - 11081.542: 89.5673% ( 24) 00:11:59.582 11081.542 - 11141.120: 89.7196% ( 19) 00:11:59.582 11141.120 - 11200.698: 89.8397% ( 15) 00:11:59.582 11200.698 - 11260.276: 89.9599% ( 15) 00:11:59.582 11260.276 - 11319.855: 90.0962% ( 17) 00:11:59.582 11319.855 - 11379.433: 90.2083% ( 14) 00:11:59.582 11379.433 - 11439.011: 90.3045% ( 12) 00:11:59.582 11439.011 - 11498.589: 90.3926% ( 11) 00:11:59.582 11498.589 - 11558.167: 90.4728% ( 10) 00:11:59.582 11558.167 - 11617.745: 90.5449% ( 9) 00:11:59.582 11617.745 - 11677.324: 90.6170% ( 9) 00:11:59.582 11677.324 - 11736.902: 90.6971% ( 10) 00:11:59.582 11736.902 - 11796.480: 90.7933% ( 12) 00:11:59.582 11796.480 - 11856.058: 90.8654% ( 9) 00:11:59.582 11856.058 - 11915.636: 90.9696% ( 13) 00:11:59.582 11915.636 - 
11975.215: 91.0417% ( 9) 00:11:59.582 11975.215 - 12034.793: 91.1138% ( 9) 00:11:59.582 12034.793 - 12094.371: 91.2019% ( 11) 00:11:59.582 12094.371 - 12153.949: 91.3061% ( 13) 00:11:59.582 12153.949 - 12213.527: 91.4423% ( 17) 00:11:59.582 12213.527 - 12273.105: 91.5625% ( 15) 00:11:59.582 12273.105 - 12332.684: 91.6907% ( 16) 00:11:59.582 12332.684 - 12392.262: 91.7949% ( 13) 00:11:59.582 12392.262 - 12451.840: 91.9151% ( 15) 00:11:59.582 12451.840 - 12511.418: 92.0593% ( 18) 00:11:59.582 12511.418 - 12570.996: 92.2196% ( 20) 00:11:59.582 12570.996 - 12630.575: 92.4038% ( 23) 00:11:59.582 12630.575 - 12690.153: 92.6122% ( 26) 00:11:59.582 12690.153 - 12749.731: 92.8446% ( 29) 00:11:59.582 12749.731 - 12809.309: 93.1170% ( 34) 00:11:59.582 12809.309 - 12868.887: 93.4215% ( 38) 00:11:59.582 12868.887 - 12928.465: 93.7019% ( 35) 00:11:59.582 12928.465 - 12988.044: 94.0224% ( 40) 00:11:59.582 12988.044 - 13047.622: 94.3590% ( 42) 00:11:59.582 13047.622 - 13107.200: 94.7436% ( 48) 00:11:59.582 13107.200 - 13166.778: 95.1362% ( 49) 00:11:59.582 13166.778 - 13226.356: 95.5288% ( 49) 00:11:59.582 13226.356 - 13285.935: 95.9054% ( 47) 00:11:59.582 13285.935 - 13345.513: 96.2500% ( 43) 00:11:59.582 13345.513 - 13405.091: 96.5465% ( 37) 00:11:59.582 13405.091 - 13464.669: 96.8830% ( 42) 00:11:59.582 13464.669 - 13524.247: 97.1955% ( 39) 00:11:59.582 13524.247 - 13583.825: 97.4519% ( 32) 00:11:59.582 13583.825 - 13643.404: 97.7083% ( 32) 00:11:59.582 13643.404 - 13702.982: 97.9407% ( 29) 00:11:59.582 13702.982 - 13762.560: 98.1250% ( 23) 00:11:59.582 13762.560 - 13822.138: 98.3173% ( 24) 00:11:59.582 13822.138 - 13881.716: 98.4615% ( 18) 00:11:59.582 13881.716 - 13941.295: 98.5897% ( 16) 00:11:59.582 13941.295 - 14000.873: 98.7260% ( 17) 00:11:59.582 14000.873 - 14060.451: 98.8622% ( 17) 00:11:59.583 14060.451 - 14120.029: 98.9183% ( 7) 00:11:59.583 14120.029 - 14179.607: 98.9663% ( 6) 00:11:59.583 14179.607 - 14239.185: 98.9744% ( 1) 00:11:59.583 24427.055 - 24546.211: 98.9904% ( 2) 00:11:59.583 24546.211 - 24665.367: 99.0064% ( 2) 00:11:59.583 24665.367 - 24784.524: 99.0304% ( 3) 00:11:59.583 24784.524 - 24903.680: 99.0545% ( 3) 00:11:59.583 24903.680 - 25022.836: 99.0785% ( 3) 00:11:59.583 25022.836 - 25141.993: 99.1026% ( 3) 00:11:59.583 25141.993 - 25261.149: 99.1346% ( 4) 00:11:59.583 25261.149 - 25380.305: 99.1587% ( 3) 00:11:59.583 25380.305 - 25499.462: 99.1827% ( 3) 00:11:59.583 25499.462 - 25618.618: 99.2067% ( 3) 00:11:59.583 25618.618 - 25737.775: 99.2388% ( 4) 00:11:59.583 25737.775 - 25856.931: 99.2548% ( 2) 00:11:59.583 25856.931 - 25976.087: 99.2869% ( 4) 00:11:59.583 25976.087 - 26095.244: 99.3109% ( 3) 00:11:59.583 26095.244 - 26214.400: 99.3349% ( 3) 00:11:59.583 26214.400 - 26333.556: 99.3590% ( 3) 00:11:59.583 26333.556 - 26452.713: 99.3830% ( 3) 00:11:59.583 26452.713 - 26571.869: 99.4071% ( 3) 00:11:59.583 26571.869 - 26691.025: 99.4391% ( 4) 00:11:59.583 26691.025 - 26810.182: 99.4712% ( 4) 00:11:59.583 26810.182 - 26929.338: 99.4872% ( 2) 00:11:59.583 31695.593 - 31933.905: 99.5032% ( 2) 00:11:59.583 31933.905 - 32172.218: 99.5673% ( 8) 00:11:59.583 32172.218 - 32410.531: 99.6314% ( 8) 00:11:59.583 32410.531 - 32648.844: 99.6875% ( 7) 00:11:59.583 32648.844 - 32887.156: 99.7436% ( 7) 00:11:59.583 32887.156 - 33125.469: 99.8077% ( 8) 00:11:59.583 33125.469 - 33363.782: 99.8718% ( 8) 00:11:59.583 33363.782 - 33602.095: 99.9359% ( 8) 00:11:59.583 33602.095 - 33840.407: 100.0000% ( 8) 00:11:59.583 00:11:59.583 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 
00:11:59.583 ============================================================================== 00:11:59.583 Range in us Cumulative IO count 00:11:59.583 8638.836 - 8698.415: 0.0401% ( 5) 00:11:59.583 8698.415 - 8757.993: 0.2003% ( 20) 00:11:59.583 8757.993 - 8817.571: 0.4888% ( 36) 00:11:59.583 8817.571 - 8877.149: 1.1619% ( 84) 00:11:59.583 8877.149 - 8936.727: 1.9872% ( 103) 00:11:59.583 8936.727 - 8996.305: 2.9808% ( 124) 00:11:59.583 8996.305 - 9055.884: 4.4231% ( 180) 00:11:59.583 9055.884 - 9115.462: 6.4503% ( 253) 00:11:59.583 9115.462 - 9175.040: 8.6699% ( 277) 00:11:59.583 9175.040 - 9234.618: 11.0417% ( 296) 00:11:59.583 9234.618 - 9294.196: 13.8381% ( 349) 00:11:59.583 9294.196 - 9353.775: 17.2516% ( 426) 00:11:59.583 9353.775 - 9413.353: 20.8654% ( 451) 00:11:59.583 9413.353 - 9472.931: 24.9038% ( 504) 00:11:59.583 9472.931 - 9532.509: 28.9904% ( 510) 00:11:59.583 9532.509 - 9592.087: 33.3894% ( 549) 00:11:59.583 9592.087 - 9651.665: 38.0369% ( 580) 00:11:59.583 9651.665 - 9711.244: 42.4760% ( 554) 00:11:59.583 9711.244 - 9770.822: 47.0673% ( 573) 00:11:59.583 9770.822 - 9830.400: 51.5465% ( 559) 00:11:59.583 9830.400 - 9889.978: 55.8734% ( 540) 00:11:59.583 9889.978 - 9949.556: 60.0160% ( 517) 00:11:59.583 9949.556 - 10009.135: 64.1106% ( 511) 00:11:59.583 10009.135 - 10068.713: 67.7644% ( 456) 00:11:59.583 10068.713 - 10128.291: 71.0256% ( 407) 00:11:59.583 10128.291 - 10187.869: 73.9824% ( 369) 00:11:59.583 10187.869 - 10247.447: 76.7708% ( 348) 00:11:59.583 10247.447 - 10307.025: 79.1186% ( 293) 00:11:59.583 10307.025 - 10366.604: 81.1859% ( 258) 00:11:59.583 10366.604 - 10426.182: 82.9247% ( 217) 00:11:59.583 10426.182 - 10485.760: 84.3590% ( 179) 00:11:59.583 10485.760 - 10545.338: 85.4647% ( 138) 00:11:59.583 10545.338 - 10604.916: 86.4663% ( 125) 00:11:59.583 10604.916 - 10664.495: 87.2676% ( 100) 00:11:59.583 10664.495 - 10724.073: 87.8686% ( 75) 00:11:59.583 10724.073 - 10783.651: 88.3253% ( 57) 00:11:59.583 10783.651 - 10843.229: 88.7179% ( 49) 00:11:59.583 10843.229 - 10902.807: 89.0224% ( 38) 00:11:59.583 10902.807 - 10962.385: 89.2788% ( 32) 00:11:59.583 10962.385 - 11021.964: 89.4872% ( 26) 00:11:59.583 11021.964 - 11081.542: 89.6875% ( 25) 00:11:59.583 11081.542 - 11141.120: 89.8798% ( 24) 00:11:59.583 11141.120 - 11200.698: 90.0721% ( 24) 00:11:59.583 11200.698 - 11260.276: 90.2404% ( 21) 00:11:59.583 11260.276 - 11319.855: 90.3285% ( 11) 00:11:59.583 11319.855 - 11379.433: 90.4247% ( 12) 00:11:59.583 11379.433 - 11439.011: 90.4888% ( 8) 00:11:59.583 11439.011 - 11498.589: 90.5609% ( 9) 00:11:59.583 11498.589 - 11558.167: 90.6410% ( 10) 00:11:59.583 11558.167 - 11617.745: 90.7051% ( 8) 00:11:59.583 11617.745 - 11677.324: 90.7772% ( 9) 00:11:59.583 11677.324 - 11736.902: 90.8253% ( 6) 00:11:59.583 11736.902 - 11796.480: 90.8814% ( 7) 00:11:59.583 11796.480 - 11856.058: 90.9215% ( 5) 00:11:59.583 11856.058 - 11915.636: 90.9776% ( 7) 00:11:59.583 11915.636 - 11975.215: 91.0256% ( 6) 00:11:59.583 11975.215 - 12034.793: 91.0657% ( 5) 00:11:59.583 12034.793 - 12094.371: 91.1378% ( 9) 00:11:59.583 12094.371 - 12153.949: 91.2099% ( 9) 00:11:59.583 12153.949 - 12213.527: 91.3061% ( 12) 00:11:59.583 12213.527 - 12273.105: 91.4022% ( 12) 00:11:59.583 12273.105 - 12332.684: 91.4904% ( 11) 00:11:59.583 12332.684 - 12392.262: 91.5865% ( 12) 00:11:59.583 12392.262 - 12451.840: 91.6827% ( 12) 00:11:59.583 12451.840 - 12511.418: 91.8029% ( 15) 00:11:59.583 12511.418 - 12570.996: 91.9872% ( 23) 00:11:59.583 12570.996 - 12630.575: 92.1955% ( 26) 00:11:59.583 12630.575 - 12690.153: 
92.4279% ( 29) 00:11:59.583 12690.153 - 12749.731: 92.7083% ( 35) 00:11:59.583 12749.731 - 12809.309: 92.9567% ( 31) 00:11:59.583 12809.309 - 12868.887: 93.2292% ( 34) 00:11:59.583 12868.887 - 12928.465: 93.5657% ( 42) 00:11:59.583 12928.465 - 12988.044: 93.9343% ( 46) 00:11:59.583 12988.044 - 13047.622: 94.3510% ( 52) 00:11:59.583 13047.622 - 13107.200: 94.7196% ( 46) 00:11:59.583 13107.200 - 13166.778: 95.0962% ( 47) 00:11:59.583 13166.778 - 13226.356: 95.4487% ( 44) 00:11:59.583 13226.356 - 13285.935: 95.8253% ( 47) 00:11:59.583 13285.935 - 13345.513: 96.2019% ( 47) 00:11:59.583 13345.513 - 13405.091: 96.6186% ( 52) 00:11:59.583 13405.091 - 13464.669: 96.9471% ( 41) 00:11:59.583 13464.669 - 13524.247: 97.2436% ( 37) 00:11:59.583 13524.247 - 13583.825: 97.5080% ( 33) 00:11:59.583 13583.825 - 13643.404: 97.7404% ( 29) 00:11:59.583 13643.404 - 13702.982: 97.9247% ( 23) 00:11:59.583 13702.982 - 13762.560: 98.0849% ( 20) 00:11:59.583 13762.560 - 13822.138: 98.2612% ( 22) 00:11:59.583 13822.138 - 13881.716: 98.4215% ( 20) 00:11:59.583 13881.716 - 13941.295: 98.5337% ( 14) 00:11:59.583 13941.295 - 14000.873: 98.6218% ( 11) 00:11:59.583 14000.873 - 14060.451: 98.6699% ( 6) 00:11:59.583 14060.451 - 14120.029: 98.7260% ( 7) 00:11:59.583 14120.029 - 14179.607: 98.7821% ( 7) 00:11:59.583 14179.607 - 14239.185: 98.8381% ( 7) 00:11:59.583 14239.185 - 14298.764: 98.8782% ( 5) 00:11:59.583 14298.764 - 14358.342: 98.9103% ( 4) 00:11:59.583 14358.342 - 14417.920: 98.9423% ( 4) 00:11:59.583 14417.920 - 14477.498: 98.9744% ( 4) 00:11:59.583 22401.396 - 22520.553: 98.9984% ( 3) 00:11:59.583 22520.553 - 22639.709: 99.0144% ( 2) 00:11:59.583 22639.709 - 22758.865: 99.0465% ( 4) 00:11:59.583 22758.865 - 22878.022: 99.0705% ( 3) 00:11:59.583 22878.022 - 22997.178: 99.0946% ( 3) 00:11:59.583 22997.178 - 23116.335: 99.1186% ( 3) 00:11:59.583 23116.335 - 23235.491: 99.1506% ( 4) 00:11:59.583 23235.491 - 23354.647: 99.1747% ( 3) 00:11:59.583 23354.647 - 23473.804: 99.2067% ( 4) 00:11:59.583 23473.804 - 23592.960: 99.2308% ( 3) 00:11:59.583 23592.960 - 23712.116: 99.2468% ( 2) 00:11:59.583 23712.116 - 23831.273: 99.2788% ( 4) 00:11:59.583 23831.273 - 23950.429: 99.3029% ( 3) 00:11:59.583 23950.429 - 24069.585: 99.3269% ( 3) 00:11:59.583 24069.585 - 24188.742: 99.3510% ( 3) 00:11:59.583 24188.742 - 24307.898: 99.3750% ( 3) 00:11:59.583 24307.898 - 24427.055: 99.4071% ( 4) 00:11:59.583 24427.055 - 24546.211: 99.4311% ( 3) 00:11:59.583 24546.211 - 24665.367: 99.4631% ( 4) 00:11:59.583 24665.367 - 24784.524: 99.4872% ( 3) 00:11:59.583 29550.778 - 29669.935: 99.4952% ( 1) 00:11:59.583 29669.935 - 29789.091: 99.5272% ( 4) 00:11:59.583 29789.091 - 29908.247: 99.5593% ( 4) 00:11:59.583 29908.247 - 30027.404: 99.5833% ( 3) 00:11:59.583 30027.404 - 30146.560: 99.6154% ( 4) 00:11:59.583 30146.560 - 30265.716: 99.6394% ( 3) 00:11:59.583 30265.716 - 30384.873: 99.6715% ( 4) 00:11:59.583 30384.873 - 30504.029: 99.7035% ( 4) 00:11:59.583 30504.029 - 30742.342: 99.7596% ( 7) 00:11:59.583 30742.342 - 30980.655: 99.8157% ( 7) 00:11:59.583 30980.655 - 31218.967: 99.8798% ( 8) 00:11:59.583 31218.967 - 31457.280: 99.9439% ( 8) 00:11:59.583 31457.280 - 31695.593: 100.0000% ( 7) 00:11:59.583 00:11:59.583 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:59.583 ============================================================================== 00:11:59.583 Range in us Cumulative IO count 00:11:59.583 8638.836 - 8698.415: 0.0160% ( 2) 00:11:59.583 8698.415 - 8757.993: 0.1202% ( 13) 00:11:59.583 8757.993 - 8817.571: 0.4487% ( 
41) 00:11:59.583 8817.571 - 8877.149: 1.0978% ( 81) 00:11:59.583 8877.149 - 8936.727: 1.9471% ( 106) 00:11:59.583 8936.727 - 8996.305: 2.9327% ( 123) 00:11:59.583 8996.305 - 9055.884: 4.3590% ( 178) 00:11:59.583 9055.884 - 9115.462: 6.1699% ( 226) 00:11:59.583 9115.462 - 9175.040: 8.2131% ( 255) 00:11:59.583 9175.040 - 9234.618: 10.9215% ( 338) 00:11:59.583 9234.618 - 9294.196: 13.9744% ( 381) 00:11:59.583 9294.196 - 9353.775: 17.3558% ( 422) 00:11:59.583 9353.775 - 9413.353: 20.9375% ( 447) 00:11:59.584 9413.353 - 9472.931: 24.9038% ( 495) 00:11:59.584 9472.931 - 9532.509: 29.1186% ( 526) 00:11:59.584 9532.509 - 9592.087: 33.5256% ( 550) 00:11:59.584 9592.087 - 9651.665: 38.0449% ( 564) 00:11:59.584 9651.665 - 9711.244: 42.6923% ( 580) 00:11:59.584 9711.244 - 9770.822: 47.5080% ( 601) 00:11:59.584 9770.822 - 9830.400: 52.0994% ( 573) 00:11:59.584 9830.400 - 9889.978: 56.4583% ( 544) 00:11:59.584 9889.978 - 9949.556: 60.6330% ( 521) 00:11:59.584 9949.556 - 10009.135: 64.4551% ( 477) 00:11:59.584 10009.135 - 10068.713: 68.0609% ( 450) 00:11:59.584 10068.713 - 10128.291: 71.2420% ( 397) 00:11:59.584 10128.291 - 10187.869: 74.1426% ( 362) 00:11:59.584 10187.869 - 10247.447: 76.7067% ( 320) 00:11:59.584 10247.447 - 10307.025: 79.0946% ( 298) 00:11:59.584 10307.025 - 10366.604: 81.0737% ( 247) 00:11:59.584 10366.604 - 10426.182: 82.7163% ( 205) 00:11:59.584 10426.182 - 10485.760: 84.1587% ( 180) 00:11:59.584 10485.760 - 10545.338: 85.3285% ( 146) 00:11:59.584 10545.338 - 10604.916: 86.2901% ( 120) 00:11:59.584 10604.916 - 10664.495: 87.0994% ( 101) 00:11:59.584 10664.495 - 10724.073: 87.7244% ( 78) 00:11:59.584 10724.073 - 10783.651: 88.2131% ( 61) 00:11:59.584 10783.651 - 10843.229: 88.6538% ( 55) 00:11:59.584 10843.229 - 10902.807: 88.9824% ( 41) 00:11:59.584 10902.807 - 10962.385: 89.2949% ( 39) 00:11:59.584 10962.385 - 11021.964: 89.5833% ( 36) 00:11:59.584 11021.964 - 11081.542: 89.8157% ( 29) 00:11:59.584 11081.542 - 11141.120: 90.0080% ( 24) 00:11:59.584 11141.120 - 11200.698: 90.1522% ( 18) 00:11:59.584 11200.698 - 11260.276: 90.2724% ( 15) 00:11:59.584 11260.276 - 11319.855: 90.3526% ( 10) 00:11:59.584 11319.855 - 11379.433: 90.4087% ( 7) 00:11:59.584 11379.433 - 11439.011: 90.4567% ( 6) 00:11:59.584 11439.011 - 11498.589: 90.5369% ( 10) 00:11:59.584 11498.589 - 11558.167: 90.5929% ( 7) 00:11:59.584 11558.167 - 11617.745: 90.6490% ( 7) 00:11:59.584 11617.745 - 11677.324: 90.6971% ( 6) 00:11:59.584 11677.324 - 11736.902: 90.7532% ( 7) 00:11:59.584 11736.902 - 11796.480: 90.8333% ( 10) 00:11:59.584 11796.480 - 11856.058: 90.9054% ( 9) 00:11:59.584 11856.058 - 11915.636: 90.9535% ( 6) 00:11:59.584 11915.636 - 11975.215: 91.0256% ( 9) 00:11:59.584 11975.215 - 12034.793: 91.0978% ( 9) 00:11:59.584 12034.793 - 12094.371: 91.1859% ( 11) 00:11:59.584 12094.371 - 12153.949: 91.2500% ( 8) 00:11:59.584 12153.949 - 12213.527: 91.3462% ( 12) 00:11:59.584 12213.527 - 12273.105: 91.4423% ( 12) 00:11:59.584 12273.105 - 12332.684: 91.5545% ( 14) 00:11:59.584 12332.684 - 12392.262: 91.6667% ( 14) 00:11:59.584 12392.262 - 12451.840: 91.8109% ( 18) 00:11:59.584 12451.840 - 12511.418: 91.9792% ( 21) 00:11:59.584 12511.418 - 12570.996: 92.1394% ( 20) 00:11:59.584 12570.996 - 12630.575: 92.2997% ( 20) 00:11:59.584 12630.575 - 12690.153: 92.5080% ( 26) 00:11:59.584 12690.153 - 12749.731: 92.7083% ( 25) 00:11:59.584 12749.731 - 12809.309: 92.9327% ( 28) 00:11:59.584 12809.309 - 12868.887: 93.2051% ( 34) 00:11:59.584 12868.887 - 12928.465: 93.5497% ( 43) 00:11:59.584 12928.465 - 12988.044: 93.8702% ( 40) 
00:11:59.584 12988.044 - 13047.622: 94.2468% ( 47) 00:11:59.584 13047.622 - 13107.200: 94.5913% ( 43) 00:11:59.584 13107.200 - 13166.778: 94.9599% ( 46) 00:11:59.584 13166.778 - 13226.356: 95.3285% ( 46) 00:11:59.584 13226.356 - 13285.935: 95.6330% ( 38) 00:11:59.584 13285.935 - 13345.513: 95.9776% ( 43) 00:11:59.584 13345.513 - 13405.091: 96.2981% ( 40) 00:11:59.584 13405.091 - 13464.669: 96.6426% ( 43) 00:11:59.584 13464.669 - 13524.247: 96.9391% ( 37) 00:11:59.584 13524.247 - 13583.825: 97.2115% ( 34) 00:11:59.584 13583.825 - 13643.404: 97.4599% ( 31) 00:11:59.584 13643.404 - 13702.982: 97.7163% ( 32) 00:11:59.584 13702.982 - 13762.560: 97.9407% ( 28) 00:11:59.584 13762.560 - 13822.138: 98.1410% ( 25) 00:11:59.584 13822.138 - 13881.716: 98.2933% ( 19) 00:11:59.584 13881.716 - 13941.295: 98.3894% ( 12) 00:11:59.584 13941.295 - 14000.873: 98.4696% ( 10) 00:11:59.584 14000.873 - 14060.451: 98.5497% ( 10) 00:11:59.584 14060.451 - 14120.029: 98.6378% ( 11) 00:11:59.584 14120.029 - 14179.607: 98.7099% ( 9) 00:11:59.584 14179.607 - 14239.185: 98.7981% ( 11) 00:11:59.584 14239.185 - 14298.764: 98.8622% ( 8) 00:11:59.584 14298.764 - 14358.342: 98.8862% ( 3) 00:11:59.584 14358.342 - 14417.920: 98.9183% ( 4) 00:11:59.584 14417.920 - 14477.498: 98.9503% ( 4) 00:11:59.584 14477.498 - 14537.076: 98.9744% ( 3) 00:11:59.584 20375.738 - 20494.895: 98.9904% ( 2) 00:11:59.584 20494.895 - 20614.051: 99.0224% ( 4) 00:11:59.584 20614.051 - 20733.207: 99.0465% ( 3) 00:11:59.584 20733.207 - 20852.364: 99.0625% ( 2) 00:11:59.584 20852.364 - 20971.520: 99.0946% ( 4) 00:11:59.584 20971.520 - 21090.676: 99.1266% ( 4) 00:11:59.584 21090.676 - 21209.833: 99.1506% ( 3) 00:11:59.584 21209.833 - 21328.989: 99.1827% ( 4) 00:11:59.584 21328.989 - 21448.145: 99.2067% ( 3) 00:11:59.584 21448.145 - 21567.302: 99.2308% ( 3) 00:11:59.584 21567.302 - 21686.458: 99.2548% ( 3) 00:11:59.584 21686.458 - 21805.615: 99.2788% ( 3) 00:11:59.584 21805.615 - 21924.771: 99.3029% ( 3) 00:11:59.584 21924.771 - 22043.927: 99.3349% ( 4) 00:11:59.584 22043.927 - 22163.084: 99.3590% ( 3) 00:11:59.584 22163.084 - 22282.240: 99.3830% ( 3) 00:11:59.584 22282.240 - 22401.396: 99.4151% ( 4) 00:11:59.584 22401.396 - 22520.553: 99.4391% ( 3) 00:11:59.584 22520.553 - 22639.709: 99.4712% ( 4) 00:11:59.584 22639.709 - 22758.865: 99.4872% ( 2) 00:11:59.584 27286.807 - 27405.964: 99.5112% ( 3) 00:11:59.584 27405.964 - 27525.120: 99.5433% ( 4) 00:11:59.584 27525.120 - 27644.276: 99.5753% ( 4) 00:11:59.584 27644.276 - 27763.433: 99.6074% ( 4) 00:11:59.584 27763.433 - 27882.589: 99.6394% ( 4) 00:11:59.584 27882.589 - 28001.745: 99.6635% ( 3) 00:11:59.584 28001.745 - 28120.902: 99.6875% ( 3) 00:11:59.584 28120.902 - 28240.058: 99.7196% ( 4) 00:11:59.584 28240.058 - 28359.215: 99.7516% ( 4) 00:11:59.584 28359.215 - 28478.371: 99.7756% ( 3) 00:11:59.584 28478.371 - 28597.527: 99.8077% ( 4) 00:11:59.584 28597.527 - 28716.684: 99.8478% ( 5) 00:11:59.584 28716.684 - 28835.840: 99.8718% ( 3) 00:11:59.584 28835.840 - 28954.996: 99.8958% ( 3) 00:11:59.584 28954.996 - 29074.153: 99.9199% ( 3) 00:11:59.584 29074.153 - 29193.309: 99.9519% ( 4) 00:11:59.584 29193.309 - 29312.465: 99.9840% ( 4) 00:11:59.584 29312.465 - 29431.622: 100.0000% ( 2) 00:11:59.584 00:11:59.584 09:08:54 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:12:00.960 Initializing NVMe Controllers 00:12:00.960 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:00.960 Attached to NVMe Controller at 
0000:00:11.0 [1b36:0010] 00:12:00.960 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:00.960 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:00.960 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:00.960 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:00.960 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:00.960 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:00.960 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:00.960 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:00.960 Initialization complete. Launching workers. 00:12:00.960 ======================================================== 00:12:00.960 Latency(us) 00:12:00.960 Device Information : IOPS MiB/s Average min max 00:12:00.960 PCIE (0000:00:10.0) NSID 1 from core 0: 10379.48 121.63 12367.84 8369.09 41111.86 00:12:00.960 PCIE (0000:00:11.0) NSID 1 from core 0: 10379.48 121.63 12344.76 8557.71 38636.68 00:12:00.960 PCIE (0000:00:13.0) NSID 1 from core 0: 10379.48 121.63 12321.34 8437.81 37157.71 00:12:00.960 PCIE (0000:00:12.0) NSID 1 from core 0: 10379.48 121.63 12295.90 8484.85 34182.96 00:12:00.960 PCIE (0000:00:12.0) NSID 2 from core 0: 10379.48 121.63 12271.12 8287.57 32229.58 00:12:00.960 PCIE (0000:00:12.0) NSID 3 from core 0: 10379.48 121.63 12245.99 8575.89 29581.06 00:12:00.960 ======================================================== 00:12:00.961 Total : 62276.88 729.81 12307.83 8287.57 41111.86 00:12:00.961 00:12:00.961 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:00.961 ================================================================================= 00:12:00.961 1.00000% : 9294.196us 00:12:00.961 10.00000% : 10128.291us 00:12:00.961 25.00000% : 10545.338us 00:12:00.961 50.00000% : 11379.433us 00:12:00.961 75.00000% : 13226.356us 00:12:00.961 90.00000% : 15966.953us 00:12:00.961 95.00000% : 16443.578us 00:12:00.961 98.00000% : 17039.360us 00:12:00.961 99.00000% : 30384.873us 00:12:00.961 99.50000% : 39321.600us 00:12:00.961 99.90000% : 40989.789us 00:12:00.961 99.99000% : 41228.102us 00:12:00.961 99.99900% : 41228.102us 00:12:00.961 99.99990% : 41228.102us 00:12:00.961 99.99999% : 41228.102us 00:12:00.961 00:12:00.961 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:00.961 ================================================================================= 00:12:00.961 1.00000% : 9234.618us 00:12:00.961 10.00000% : 10187.869us 00:12:00.961 25.00000% : 10545.338us 00:12:00.961 50.00000% : 11260.276us 00:12:00.961 75.00000% : 13107.200us 00:12:00.961 90.00000% : 15847.796us 00:12:00.961 95.00000% : 16324.422us 00:12:00.961 98.00000% : 16920.204us 00:12:00.961 99.00000% : 29550.778us 00:12:00.961 99.50000% : 36938.473us 00:12:00.961 99.90000% : 38368.349us 00:12:00.961 99.99000% : 38606.662us 00:12:00.961 99.99900% : 38844.975us 00:12:00.961 99.99990% : 38844.975us 00:12:00.961 99.99999% : 38844.975us 00:12:00.961 00:12:00.961 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:00.961 ================================================================================= 00:12:00.961 1.00000% : 9353.775us 00:12:00.961 10.00000% : 10187.869us 00:12:00.961 25.00000% : 10604.916us 00:12:00.961 50.00000% : 11260.276us 00:12:00.961 75.00000% : 13166.778us 00:12:00.961 90.00000% : 15847.796us 00:12:00.961 95.00000% : 16324.422us 00:12:00.961 98.00000% : 16681.891us 00:12:00.961 99.00000% : 28120.902us 00:12:00.961 99.50000% : 35508.596us 00:12:00.961 99.90000% : 36938.473us 00:12:00.961 
99.99000% : 37176.785us 00:12:00.961 99.99900% : 37176.785us 00:12:00.961 99.99990% : 37176.785us 00:12:00.961 99.99999% : 37176.785us 00:12:00.961 00:12:00.961 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:00.961 ================================================================================= 00:12:00.961 1.00000% : 9353.775us 00:12:00.961 10.00000% : 10187.869us 00:12:00.961 25.00000% : 10545.338us 00:12:00.961 50.00000% : 11260.276us 00:12:00.961 75.00000% : 13107.200us 00:12:00.961 90.00000% : 15847.796us 00:12:00.961 95.00000% : 16324.422us 00:12:00.961 98.00000% : 16801.047us 00:12:00.961 99.00000% : 26571.869us 00:12:00.961 99.50000% : 31457.280us 00:12:00.961 99.90000% : 33840.407us 00:12:00.961 99.99000% : 34317.033us 00:12:00.961 99.99900% : 34317.033us 00:12:00.961 99.99990% : 34317.033us 00:12:00.961 99.99999% : 34317.033us 00:12:00.961 00:12:00.961 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:00.961 ================================================================================= 00:12:00.961 1.00000% : 9353.775us 00:12:00.961 10.00000% : 10187.869us 00:12:00.961 25.00000% : 10545.338us 00:12:00.961 50.00000% : 11319.855us 00:12:00.961 75.00000% : 13047.622us 00:12:00.961 90.00000% : 15847.796us 00:12:00.961 95.00000% : 16324.422us 00:12:00.961 98.00000% : 16801.047us 00:12:00.961 99.00000% : 23235.491us 00:12:00.961 99.50000% : 30504.029us 00:12:00.961 99.90000% : 31933.905us 00:12:00.961 99.99000% : 32410.531us 00:12:00.961 99.99900% : 32410.531us 00:12:00.961 99.99990% : 32410.531us 00:12:00.961 99.99999% : 32410.531us 00:12:00.961 00:12:00.961 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:00.961 ================================================================================= 00:12:00.961 1.00000% : 9294.196us 00:12:00.961 10.00000% : 10187.869us 00:12:00.961 25.00000% : 10545.338us 00:12:00.961 50.00000% : 11319.855us 00:12:00.961 75.00000% : 13107.200us 00:12:00.961 90.00000% : 15847.796us 00:12:00.961 95.00000% : 16443.578us 00:12:00.961 98.00000% : 16920.204us 00:12:00.961 99.00000% : 21448.145us 00:12:00.961 99.50000% : 26810.182us 00:12:00.961 99.90000% : 29193.309us 00:12:00.961 99.99000% : 29550.778us 00:12:00.961 99.99900% : 29669.935us 00:12:00.961 99.99990% : 29669.935us 00:12:00.961 99.99999% : 29669.935us 00:12:00.961 00:12:00.961 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:12:00.961 ============================================================================== 00:12:00.961 Range in us Cumulative IO count 00:12:00.961 8340.945 - 8400.524: 0.0192% ( 2) 00:12:00.961 8400.524 - 8460.102: 0.0959% ( 8) 00:12:00.961 8519.680 - 8579.258: 0.1150% ( 2) 00:12:00.961 8579.258 - 8638.836: 0.1342% ( 2) 00:12:00.961 8638.836 - 8698.415: 0.1438% ( 1) 00:12:00.961 8698.415 - 8757.993: 0.1630% ( 2) 00:12:00.961 8757.993 - 8817.571: 0.1725% ( 1) 00:12:00.961 8817.571 - 8877.149: 0.2013% ( 3) 00:12:00.961 8877.149 - 8936.727: 0.2972% ( 10) 00:12:00.961 8936.727 - 8996.305: 0.4122% ( 12) 00:12:00.961 8996.305 - 9055.884: 0.5272% ( 12) 00:12:00.961 9055.884 - 9115.462: 0.6710% ( 15) 00:12:00.961 9115.462 - 9175.040: 0.8052% ( 14) 00:12:00.961 9175.040 - 9234.618: 0.9682% ( 17) 00:12:00.961 9234.618 - 9294.196: 1.1599% ( 20) 00:12:00.961 9294.196 - 9353.775: 1.3037% ( 15) 00:12:00.961 9353.775 - 9413.353: 1.4187% ( 12) 00:12:00.961 9413.353 - 9472.931: 1.6392% ( 23) 00:12:00.961 9472.931 - 9532.509: 2.0610% ( 44) 00:12:00.961 9532.509 - 9592.087: 2.4156% ( 37) 00:12:00.961 9592.087 - 
9651.665: 2.8183% ( 42) 00:12:00.961 9651.665 - 9711.244: 3.3359% ( 54) 00:12:00.961 9711.244 - 9770.822: 3.9686% ( 66) 00:12:00.961 9770.822 - 9830.400: 4.9367% ( 101) 00:12:00.961 9830.400 - 9889.978: 6.1350% ( 125) 00:12:00.961 9889.978 - 9949.556: 7.3236% ( 124) 00:12:00.961 9949.556 - 10009.135: 8.3877% ( 111) 00:12:00.961 10009.135 - 10068.713: 9.9885% ( 167) 00:12:00.961 10068.713 - 10128.291: 11.9632% ( 206) 00:12:00.961 10128.291 - 10187.869: 13.5736% ( 168) 00:12:00.961 10187.869 - 10247.447: 15.6825% ( 220) 00:12:00.961 10247.447 - 10307.025: 17.8010% ( 221) 00:12:00.961 10307.025 - 10366.604: 20.3700% ( 268) 00:12:00.961 10366.604 - 10426.182: 22.8432% ( 258) 00:12:00.961 10426.182 - 10485.760: 24.9137% ( 216) 00:12:00.961 10485.760 - 10545.338: 27.5882% ( 279) 00:12:00.961 10545.338 - 10604.916: 30.1668% ( 269) 00:12:00.961 10604.916 - 10664.495: 32.1319% ( 205) 00:12:00.961 10664.495 - 10724.073: 34.1641% ( 212) 00:12:00.961 10724.073 - 10783.651: 35.8416% ( 175) 00:12:00.961 10783.651 - 10843.229: 37.5000% ( 173) 00:12:00.961 10843.229 - 10902.807: 39.0625% ( 163) 00:12:00.961 10902.807 - 10962.385: 40.5483% ( 155) 00:12:00.961 10962.385 - 11021.964: 42.1108% ( 163) 00:12:00.961 11021.964 - 11081.542: 43.5104% ( 146) 00:12:00.961 11081.542 - 11141.120: 45.0633% ( 162) 00:12:00.961 11141.120 - 11200.698: 46.6066% ( 161) 00:12:00.961 11200.698 - 11260.276: 48.0828% ( 154) 00:12:00.961 11260.276 - 11319.855: 49.3865% ( 136) 00:12:00.961 11319.855 - 11379.433: 50.5752% ( 124) 00:12:00.961 11379.433 - 11439.011: 51.7063% ( 118) 00:12:00.961 11439.011 - 11498.589: 52.8278% ( 117) 00:12:00.961 11498.589 - 11558.167: 53.8535% ( 107) 00:12:00.961 11558.167 - 11617.745: 54.8409% ( 103) 00:12:00.961 11617.745 - 11677.324: 55.8186% ( 102) 00:12:00.961 11677.324 - 11736.902: 56.5567% ( 77) 00:12:00.961 11736.902 - 11796.480: 57.3524% ( 83) 00:12:00.961 11796.480 - 11856.058: 58.2822% ( 97) 00:12:00.961 11856.058 - 11915.636: 59.0874% ( 84) 00:12:00.961 11915.636 - 11975.215: 59.7584% ( 70) 00:12:00.961 11975.215 - 12034.793: 60.3048% ( 57) 00:12:00.962 12034.793 - 12094.371: 60.9950% ( 72) 00:12:00.962 12094.371 - 12153.949: 62.0974% ( 115) 00:12:00.962 12153.949 - 12213.527: 63.3052% ( 126) 00:12:00.962 12213.527 - 12273.105: 64.0913% ( 82) 00:12:00.962 12273.105 - 12332.684: 64.8869% ( 83) 00:12:00.962 12332.684 - 12392.262: 65.8263% ( 98) 00:12:00.962 12392.262 - 12451.840: 67.0629% ( 129) 00:12:00.962 12451.840 - 12511.418: 67.8777% ( 85) 00:12:00.962 12511.418 - 12570.996: 68.8075% ( 97) 00:12:00.962 12570.996 - 12630.575: 69.6031% ( 83) 00:12:00.962 12630.575 - 12690.153: 70.1208% ( 54) 00:12:00.962 12690.153 - 12749.731: 70.4659% ( 36) 00:12:00.962 12749.731 - 12809.309: 70.9356% ( 49) 00:12:00.962 12809.309 - 12868.887: 71.5874% ( 68) 00:12:00.962 12868.887 - 12928.465: 72.2009% ( 64) 00:12:00.962 12928.465 - 12988.044: 72.8623% ( 69) 00:12:00.962 12988.044 - 13047.622: 73.7059% ( 88) 00:12:00.962 13047.622 - 13107.200: 74.3002% ( 62) 00:12:00.962 13107.200 - 13166.778: 74.8083% ( 53) 00:12:00.962 13166.778 - 13226.356: 75.2588% ( 47) 00:12:00.962 13226.356 - 13285.935: 75.9011% ( 67) 00:12:00.962 13285.935 - 13345.513: 76.3037% ( 42) 00:12:00.962 13345.513 - 13405.091: 76.7255% ( 44) 00:12:00.962 13405.091 - 13464.669: 77.0226% ( 31) 00:12:00.962 13464.669 - 13524.247: 77.3677% ( 36) 00:12:00.962 13524.247 - 13583.825: 77.5882% ( 23) 00:12:00.962 13583.825 - 13643.404: 77.8087% ( 23) 00:12:00.962 13643.404 - 13702.982: 77.9716% ( 17) 00:12:00.962 13702.982 - 13762.560: 78.2017% 
( 24) 00:12:00.962 13762.560 - 13822.138: 78.4413% ( 25) 00:12:00.962 13822.138 - 13881.716: 78.5947% ( 16) 00:12:00.962 13881.716 - 13941.295: 78.7673% ( 18) 00:12:00.962 13941.295 - 14000.873: 78.8631% ( 10) 00:12:00.962 14000.873 - 14060.451: 78.9877% ( 13) 00:12:00.962 14060.451 - 14120.029: 79.0548% ( 7) 00:12:00.962 14120.029 - 14179.607: 79.1890% ( 14) 00:12:00.962 14179.607 - 14239.185: 79.3328% ( 15) 00:12:00.962 14239.185 - 14298.764: 79.4287% ( 10) 00:12:00.962 14298.764 - 14358.342: 79.5054% ( 8) 00:12:00.962 14358.342 - 14417.920: 79.6108% ( 11) 00:12:00.962 14417.920 - 14477.498: 79.7067% ( 10) 00:12:00.962 14477.498 - 14537.076: 79.7450% ( 4) 00:12:00.962 14537.076 - 14596.655: 79.8121% ( 7) 00:12:00.962 14596.655 - 14656.233: 79.8505% ( 4) 00:12:00.962 14656.233 - 14715.811: 79.8696% ( 2) 00:12:00.962 14715.811 - 14775.389: 80.1476% ( 29) 00:12:00.962 14775.389 - 14834.967: 80.4640% ( 33) 00:12:00.962 14834.967 - 14894.545: 80.8666% ( 42) 00:12:00.962 14894.545 - 14954.124: 81.3842% ( 54) 00:12:00.962 14954.124 - 15013.702: 81.8156% ( 45) 00:12:00.962 15013.702 - 15073.280: 82.1511% ( 35) 00:12:00.962 15073.280 - 15132.858: 82.6687% ( 54) 00:12:00.962 15132.858 - 15192.436: 83.5027% ( 87) 00:12:00.962 15192.436 - 15252.015: 83.9149% ( 43) 00:12:00.962 15252.015 - 15371.171: 84.7968% ( 92) 00:12:00.962 15371.171 - 15490.327: 85.9950% ( 125) 00:12:00.962 15490.327 - 15609.484: 87.4329% ( 150) 00:12:00.962 15609.484 - 15728.640: 88.4873% ( 110) 00:12:00.962 15728.640 - 15847.796: 89.6856% ( 125) 00:12:00.962 15847.796 - 15966.953: 91.2385% ( 162) 00:12:00.962 15966.953 - 16086.109: 92.5997% ( 142) 00:12:00.962 16086.109 - 16205.265: 93.7500% ( 120) 00:12:00.962 16205.265 - 16324.422: 94.9866% ( 129) 00:12:00.962 16324.422 - 16443.578: 95.8206% ( 87) 00:12:00.962 16443.578 - 16562.735: 96.3957% ( 60) 00:12:00.962 16562.735 - 16681.891: 96.8942% ( 52) 00:12:00.962 16681.891 - 16801.047: 97.3447% ( 47) 00:12:00.962 16801.047 - 16920.204: 97.7857% ( 46) 00:12:00.962 16920.204 - 17039.360: 98.1116% ( 34) 00:12:00.962 17039.360 - 17158.516: 98.3416% ( 24) 00:12:00.962 17158.516 - 17277.673: 98.5429% ( 21) 00:12:00.962 17277.673 - 17396.829: 98.6580% ( 12) 00:12:00.962 17396.829 - 17515.985: 98.6867% ( 3) 00:12:00.962 17515.985 - 17635.142: 98.7155% ( 3) 00:12:00.962 17635.142 - 17754.298: 98.7730% ( 6) 00:12:00.962 29908.247 - 30027.404: 98.7826% ( 1) 00:12:00.962 30027.404 - 30146.560: 98.8689% ( 9) 00:12:00.962 30146.560 - 30265.716: 98.9743% ( 11) 00:12:00.962 30265.716 - 30384.873: 99.0318% ( 6) 00:12:00.962 30384.873 - 30504.029: 99.0893% ( 6) 00:12:00.962 30504.029 - 30742.342: 99.1277% ( 4) 00:12:00.962 30742.342 - 30980.655: 99.1660% ( 4) 00:12:00.962 30980.655 - 31218.967: 99.2044% ( 4) 00:12:00.962 31218.967 - 31457.280: 99.2619% ( 6) 00:12:00.962 31457.280 - 31695.593: 99.3194% ( 6) 00:12:00.962 31695.593 - 31933.905: 99.3865% ( 7) 00:12:00.962 38606.662 - 38844.975: 99.4344% ( 5) 00:12:00.962 38844.975 - 39083.287: 99.4919% ( 6) 00:12:00.962 39083.287 - 39321.600: 99.5495% ( 6) 00:12:00.962 39321.600 - 39559.913: 99.6166% ( 7) 00:12:00.962 39559.913 - 39798.225: 99.6645% ( 5) 00:12:00.962 39798.225 - 40036.538: 99.7220% ( 6) 00:12:00.962 40036.538 - 40274.851: 99.7891% ( 7) 00:12:00.962 40274.851 - 40513.164: 99.8562% ( 7) 00:12:00.962 40513.164 - 40751.476: 99.8946% ( 4) 00:12:00.962 40751.476 - 40989.789: 99.9712% ( 8) 00:12:00.962 40989.789 - 41228.102: 100.0000% ( 3) 00:12:00.962 00:12:00.962 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:12:00.962 
============================================================================== 00:12:00.962 Range in us Cumulative IO count 00:12:00.962 8519.680 - 8579.258: 0.0096% ( 1) 00:12:00.962 8579.258 - 8638.836: 0.0192% ( 1) 00:12:00.962 8757.993 - 8817.571: 0.0288% ( 1) 00:12:00.962 8877.149 - 8936.727: 0.0671% ( 4) 00:12:00.962 8936.727 - 8996.305: 0.3451% ( 29) 00:12:00.962 8996.305 - 9055.884: 0.6518% ( 32) 00:12:00.962 9055.884 - 9115.462: 0.7765% ( 13) 00:12:00.962 9115.462 - 9175.040: 0.8723% ( 10) 00:12:00.962 9175.040 - 9234.618: 1.0544% ( 19) 00:12:00.962 9234.618 - 9294.196: 1.1791% ( 13) 00:12:00.962 9294.196 - 9353.775: 1.2941% ( 12) 00:12:00.962 9353.775 - 9413.353: 1.4091% ( 12) 00:12:00.962 9413.353 - 9472.931: 1.5337% ( 13) 00:12:00.962 9472.931 - 9532.509: 1.7446% ( 22) 00:12:00.962 9532.509 - 9592.087: 1.8788% ( 14) 00:12:00.962 9592.087 - 9651.665: 2.1568% ( 29) 00:12:00.962 9651.665 - 9711.244: 2.9141% ( 79) 00:12:00.962 9711.244 - 9770.822: 3.3646% ( 47) 00:12:00.962 9770.822 - 9830.400: 3.7289% ( 38) 00:12:00.962 9830.400 - 9889.978: 4.5916% ( 90) 00:12:00.962 9889.978 - 9949.556: 5.5311% ( 98) 00:12:00.962 9949.556 - 10009.135: 6.6718% ( 119) 00:12:00.962 10009.135 - 10068.713: 7.9371% ( 132) 00:12:00.962 10068.713 - 10128.291: 9.0683% ( 118) 00:12:00.962 10128.291 - 10187.869: 10.6787% ( 168) 00:12:00.962 10187.869 - 10247.447: 12.4712% ( 187) 00:12:00.962 10247.447 - 10307.025: 14.2926% ( 190) 00:12:00.962 10307.025 - 10366.604: 16.6411% ( 245) 00:12:00.962 10366.604 - 10426.182: 19.2389% ( 271) 00:12:00.962 10426.182 - 10485.760: 21.9325% ( 281) 00:12:00.962 10485.760 - 10545.338: 25.0096% ( 321) 00:12:00.962 10545.338 - 10604.916: 28.2784% ( 341) 00:12:00.962 10604.916 - 10664.495: 31.4896% ( 335) 00:12:00.962 10664.495 - 10724.073: 34.4613% ( 310) 00:12:00.962 10724.073 - 10783.651: 37.2220% ( 288) 00:12:00.962 10783.651 - 10843.229: 39.8198% ( 271) 00:12:00.962 10843.229 - 10902.807: 42.2163% ( 250) 00:12:00.962 10902.807 - 10962.385: 44.0951% ( 196) 00:12:00.962 10962.385 - 11021.964: 45.7535% ( 173) 00:12:00.962 11021.964 - 11081.542: 46.9421% ( 124) 00:12:00.962 11081.542 - 11141.120: 48.0157% ( 112) 00:12:00.962 11141.120 - 11200.698: 49.0031% ( 103) 00:12:00.962 11200.698 - 11260.276: 50.0383% ( 108) 00:12:00.962 11260.276 - 11319.855: 51.1024% ( 111) 00:12:00.962 11319.855 - 11379.433: 52.0514% ( 99) 00:12:00.962 11379.433 - 11439.011: 52.7991% ( 78) 00:12:00.962 11439.011 - 11498.589: 53.7193% ( 96) 00:12:00.962 11498.589 - 11558.167: 54.5629% ( 88) 00:12:00.962 11558.167 - 11617.745: 55.1860% ( 65) 00:12:00.962 11617.745 - 11677.324: 56.1446% ( 100) 00:12:00.962 11677.324 - 11736.902: 56.6910% ( 57) 00:12:00.962 11736.902 - 11796.480: 57.3332% ( 67) 00:12:00.962 11796.480 - 11856.058: 57.9179% ( 61) 00:12:00.962 11856.058 - 11915.636: 58.4835% ( 59) 00:12:00.962 11915.636 - 11975.215: 59.0107% ( 55) 00:12:00.962 11975.215 - 12034.793: 59.7009% ( 72) 00:12:00.962 12034.793 - 12094.371: 60.7554% ( 110) 00:12:00.962 12094.371 - 12153.949: 61.8290% ( 112) 00:12:00.962 12153.949 - 12213.527: 62.7301% ( 94) 00:12:00.963 12213.527 - 12273.105: 63.7749% ( 109) 00:12:00.963 12273.105 - 12332.684: 64.8485% ( 112) 00:12:00.963 12332.684 - 12392.262: 65.7880% ( 98) 00:12:00.963 12392.262 - 12451.840: 66.9191% ( 118) 00:12:00.963 12451.840 - 12511.418: 67.8873% ( 101) 00:12:00.963 12511.418 - 12570.996: 68.8171% ( 97) 00:12:00.963 12570.996 - 12630.575: 69.7278% ( 95) 00:12:00.963 12630.575 - 12690.153: 70.5234% ( 83) 00:12:00.963 12690.153 - 12749.731: 71.4436% ( 96) 
00:12:00.963 12749.731 - 12809.309: 72.2393% ( 83) 00:12:00.963 12809.309 - 12868.887: 73.0828% ( 88) 00:12:00.963 12868.887 - 12928.465: 73.7826% ( 73) 00:12:00.963 12928.465 - 12988.044: 74.4248% ( 67) 00:12:00.963 12988.044 - 13047.622: 74.9329% ( 53) 00:12:00.963 13047.622 - 13107.200: 75.4889% ( 58) 00:12:00.963 13107.200 - 13166.778: 75.9586% ( 49) 00:12:00.963 13166.778 - 13226.356: 76.3995% ( 46) 00:12:00.963 13226.356 - 13285.935: 76.7159% ( 33) 00:12:00.963 13285.935 - 13345.513: 77.0226% ( 32) 00:12:00.963 13345.513 - 13405.091: 77.2719% ( 26) 00:12:00.963 13405.091 - 13464.669: 77.5115% ( 25) 00:12:00.963 13464.669 - 13524.247: 77.7799% ( 28) 00:12:00.963 13524.247 - 13583.825: 78.1538% ( 39) 00:12:00.963 13583.825 - 13643.404: 78.4126% ( 27) 00:12:00.963 13643.404 - 13702.982: 78.6426% ( 24) 00:12:00.963 13702.982 - 13762.560: 78.8248% ( 19) 00:12:00.963 13762.560 - 13822.138: 78.9781% ( 16) 00:12:00.963 13822.138 - 13881.716: 79.1123% ( 14) 00:12:00.963 13881.716 - 13941.295: 79.1890% ( 8) 00:12:00.963 13941.295 - 14000.873: 79.2465% ( 6) 00:12:00.963 14000.873 - 14060.451: 79.3137% ( 7) 00:12:00.963 14060.451 - 14120.029: 79.3903% ( 8) 00:12:00.963 14120.029 - 14179.607: 79.5341% ( 15) 00:12:00.963 14179.607 - 14239.185: 79.6779% ( 15) 00:12:00.963 14239.185 - 14298.764: 79.7738% ( 10) 00:12:00.963 14298.764 - 14358.342: 79.8505% ( 8) 00:12:00.963 14358.342 - 14417.920: 79.8984% ( 5) 00:12:00.963 14417.920 - 14477.498: 79.9463% ( 5) 00:12:00.963 14477.498 - 14537.076: 79.9847% ( 4) 00:12:00.963 14537.076 - 14596.655: 80.0230% ( 4) 00:12:00.963 14596.655 - 14656.233: 80.0805% ( 6) 00:12:00.963 14656.233 - 14715.811: 80.1189% ( 4) 00:12:00.963 14715.811 - 14775.389: 80.1476% ( 3) 00:12:00.963 14775.389 - 14834.967: 80.1860% ( 4) 00:12:00.963 14834.967 - 14894.545: 80.2818% ( 10) 00:12:00.963 14894.545 - 14954.124: 80.4544% ( 18) 00:12:00.963 14954.124 - 15013.702: 80.6748% ( 23) 00:12:00.963 15013.702 - 15073.280: 80.9049% ( 24) 00:12:00.963 15073.280 - 15132.858: 81.2021% ( 31) 00:12:00.963 15132.858 - 15192.436: 81.5567% ( 37) 00:12:00.963 15192.436 - 15252.015: 82.0840% ( 55) 00:12:00.963 15252.015 - 15371.171: 83.2343% ( 120) 00:12:00.963 15371.171 - 15490.327: 84.7968% ( 163) 00:12:00.963 15490.327 - 15609.484: 86.6181% ( 190) 00:12:00.963 15609.484 - 15728.640: 88.5736% ( 204) 00:12:00.963 15728.640 - 15847.796: 90.2799% ( 178) 00:12:00.963 15847.796 - 15966.953: 91.8808% ( 167) 00:12:00.963 15966.953 - 16086.109: 93.0694% ( 124) 00:12:00.963 16086.109 - 16205.265: 94.2293% ( 121) 00:12:00.963 16205.265 - 16324.422: 95.3604% ( 118) 00:12:00.963 16324.422 - 16443.578: 96.1656% ( 84) 00:12:00.963 16443.578 - 16562.735: 96.9900% ( 86) 00:12:00.963 16562.735 - 16681.891: 97.5460% ( 58) 00:12:00.963 16681.891 - 16801.047: 97.9965% ( 47) 00:12:00.963 16801.047 - 16920.204: 98.2362% ( 25) 00:12:00.963 16920.204 - 17039.360: 98.3896% ( 16) 00:12:00.963 17039.360 - 17158.516: 98.4854% ( 10) 00:12:00.963 17158.516 - 17277.673: 98.5717% ( 9) 00:12:00.963 17277.673 - 17396.829: 98.6292% ( 6) 00:12:00.963 17396.829 - 17515.985: 98.6867% ( 6) 00:12:00.963 17515.985 - 17635.142: 98.7347% ( 5) 00:12:00.963 17635.142 - 17754.298: 98.7730% ( 4) 00:12:00.963 28597.527 - 28716.684: 98.7826% ( 1) 00:12:00.963 28716.684 - 28835.840: 98.8113% ( 3) 00:12:00.963 28835.840 - 28954.996: 98.8497% ( 4) 00:12:00.963 28954.996 - 29074.153: 98.8880% ( 4) 00:12:00.963 29074.153 - 29193.309: 98.9168% ( 3) 00:12:00.963 29193.309 - 29312.465: 98.9456% ( 3) 00:12:00.963 29312.465 - 29431.622: 98.9839% ( 4) 
00:12:00.963 29431.622 - 29550.778: 99.0222% ( 4) 00:12:00.963 29550.778 - 29669.935: 99.0606% ( 4) 00:12:00.963 29669.935 - 29789.091: 99.0893% ( 3) 00:12:00.963 29789.091 - 29908.247: 99.1277% ( 4) 00:12:00.963 29908.247 - 30027.404: 99.1660% ( 4) 00:12:00.963 30027.404 - 30146.560: 99.1948% ( 3) 00:12:00.963 30146.560 - 30265.716: 99.2235% ( 3) 00:12:00.963 30265.716 - 30384.873: 99.2619% ( 4) 00:12:00.963 30384.873 - 30504.029: 99.3002% ( 4) 00:12:00.963 30504.029 - 30742.342: 99.3577% ( 6) 00:12:00.963 30742.342 - 30980.655: 99.3865% ( 3) 00:12:00.963 36223.535 - 36461.847: 99.3961% ( 1) 00:12:00.963 36461.847 - 36700.160: 99.4728% ( 8) 00:12:00.963 36700.160 - 36938.473: 99.5399% ( 7) 00:12:00.963 36938.473 - 37176.785: 99.5878% ( 5) 00:12:00.963 37176.785 - 37415.098: 99.6645% ( 8) 00:12:00.963 37415.098 - 37653.411: 99.7124% ( 5) 00:12:00.963 37653.411 - 37891.724: 99.7795% ( 7) 00:12:00.963 37891.724 - 38130.036: 99.8466% ( 7) 00:12:00.963 38130.036 - 38368.349: 99.9233% ( 8) 00:12:00.963 38368.349 - 38606.662: 99.9904% ( 7) 00:12:00.963 38606.662 - 38844.975: 100.0000% ( 1) 00:12:00.963 00:12:00.963 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:12:00.963 ============================================================================== 00:12:00.963 Range in us Cumulative IO count 00:12:00.963 8400.524 - 8460.102: 0.0096% ( 1) 00:12:00.963 8519.680 - 8579.258: 0.0288% ( 2) 00:12:00.963 8579.258 - 8638.836: 0.0671% ( 4) 00:12:00.963 8638.836 - 8698.415: 0.1054% ( 4) 00:12:00.963 8698.415 - 8757.993: 0.1725% ( 7) 00:12:00.963 8757.993 - 8817.571: 0.5176% ( 36) 00:12:00.963 8817.571 - 8877.149: 0.5464% ( 3) 00:12:00.963 8877.149 - 8936.727: 0.5752% ( 3) 00:12:00.963 8936.727 - 8996.305: 0.6039% ( 3) 00:12:00.963 8996.305 - 9055.884: 0.6135% ( 1) 00:12:00.963 9055.884 - 9115.462: 0.6518% ( 4) 00:12:00.963 9115.462 - 9175.040: 0.6998% ( 5) 00:12:00.963 9175.040 - 9234.618: 0.7765% ( 8) 00:12:00.963 9234.618 - 9294.196: 0.8915% ( 12) 00:12:00.963 9294.196 - 9353.775: 1.0736% ( 19) 00:12:00.963 9353.775 - 9413.353: 1.3995% ( 34) 00:12:00.963 9413.353 - 9472.931: 1.5913% ( 20) 00:12:00.963 9472.931 - 9532.509: 1.7542% ( 17) 00:12:00.963 9532.509 - 9592.087: 2.0035% ( 26) 00:12:00.963 9592.087 - 9651.665: 2.2623% ( 27) 00:12:00.963 9651.665 - 9711.244: 2.6649% ( 42) 00:12:00.963 9711.244 - 9770.822: 3.1442% ( 50) 00:12:00.963 9770.822 - 9830.400: 4.0548% ( 95) 00:12:00.963 9830.400 - 9889.978: 4.7738% ( 75) 00:12:00.963 9889.978 - 9949.556: 5.5886% ( 85) 00:12:00.963 9949.556 - 10009.135: 6.6622% ( 112) 00:12:00.963 10009.135 - 10068.713: 8.0138% ( 141) 00:12:00.963 10068.713 - 10128.291: 9.8351% ( 190) 00:12:00.963 10128.291 - 10187.869: 11.1484% ( 137) 00:12:00.963 10187.869 - 10247.447: 12.8547% ( 178) 00:12:00.963 10247.447 - 10307.025: 14.4843% ( 170) 00:12:00.963 10307.025 - 10366.604: 16.4877% ( 209) 00:12:00.963 10366.604 - 10426.182: 18.6733% ( 228) 00:12:00.963 10426.182 - 10485.760: 22.1434% ( 362) 00:12:00.963 10485.760 - 10545.338: 24.9521% ( 293) 00:12:00.963 10545.338 - 10604.916: 28.1729% ( 336) 00:12:00.963 10604.916 - 10664.495: 31.9018% ( 389) 00:12:00.963 10664.495 - 10724.073: 34.7584% ( 298) 00:12:00.963 10724.073 - 10783.651: 37.6246% ( 299) 00:12:00.963 10783.651 - 10843.229: 39.9061% ( 238) 00:12:00.963 10843.229 - 10902.807: 41.9574% ( 214) 00:12:00.963 10902.807 - 10962.385: 43.6925% ( 181) 00:12:00.963 10962.385 - 11021.964: 45.1591% ( 153) 00:12:00.963 11021.964 - 11081.542: 46.6066% ( 151) 00:12:00.963 11081.542 - 11141.120: 48.0637% ( 152) 
00:12:00.963 11141.120 - 11200.698: 49.2811% ( 127) 00:12:00.963 11200.698 - 11260.276: 50.4985% ( 127) 00:12:00.963 11260.276 - 11319.855: 51.6584% ( 121) 00:12:00.963 11319.855 - 11379.433: 52.8854% ( 128) 00:12:00.963 11379.433 - 11439.011: 53.6810% ( 83) 00:12:00.963 11439.011 - 11498.589: 54.6587% ( 102) 00:12:00.964 11498.589 - 11558.167: 55.4640% ( 84) 00:12:00.964 11558.167 - 11617.745: 56.0199% ( 58) 00:12:00.964 11617.745 - 11677.324: 56.6814% ( 69) 00:12:00.964 11677.324 - 11736.902: 57.3332% ( 68) 00:12:00.964 11736.902 - 11796.480: 57.7071% ( 39) 00:12:00.964 11796.480 - 11856.058: 58.1288% ( 44) 00:12:00.964 11856.058 - 11915.636: 58.6273% ( 52) 00:12:00.964 11915.636 - 11975.215: 59.2983% ( 70) 00:12:00.964 11975.215 - 12034.793: 59.8543% ( 58) 00:12:00.964 12034.793 - 12094.371: 60.4582% ( 63) 00:12:00.964 12094.371 - 12153.949: 61.6660% ( 126) 00:12:00.964 12153.949 - 12213.527: 62.8451% ( 123) 00:12:00.964 12213.527 - 12273.105: 63.6503% ( 84) 00:12:00.964 12273.105 - 12332.684: 64.4268% ( 81) 00:12:00.964 12332.684 - 12392.262: 65.2224% ( 83) 00:12:00.964 12392.262 - 12451.840: 66.0084% ( 82) 00:12:00.964 12451.840 - 12511.418: 67.1683% ( 121) 00:12:00.964 12511.418 - 12570.996: 68.0598% ( 93) 00:12:00.964 12570.996 - 12630.575: 69.1238% ( 111) 00:12:00.964 12630.575 - 12690.153: 69.9482% ( 86) 00:12:00.964 12690.153 - 12749.731: 70.6959% ( 78) 00:12:00.964 12749.731 - 12809.309: 71.5012% ( 84) 00:12:00.964 12809.309 - 12868.887: 72.2872% ( 82) 00:12:00.964 12868.887 - 12928.465: 73.0637% ( 81) 00:12:00.964 12928.465 - 12988.044: 73.6676% ( 63) 00:12:00.964 12988.044 - 13047.622: 74.1181% ( 47) 00:12:00.964 13047.622 - 13107.200: 74.6262% ( 53) 00:12:00.964 13107.200 - 13166.778: 75.0479% ( 44) 00:12:00.964 13166.778 - 13226.356: 75.5272% ( 50) 00:12:00.964 13226.356 - 13285.935: 75.8723% ( 36) 00:12:00.964 13285.935 - 13345.513: 76.2653% ( 41) 00:12:00.964 13345.513 - 13405.091: 76.6104% ( 36) 00:12:00.964 13405.091 - 13464.669: 76.9076% ( 31) 00:12:00.964 13464.669 - 13524.247: 77.1856% ( 29) 00:12:00.964 13524.247 - 13583.825: 77.5307% ( 36) 00:12:00.964 13583.825 - 13643.404: 77.8183% ( 30) 00:12:00.964 13643.404 - 13702.982: 78.0100% ( 20) 00:12:00.964 13702.982 - 13762.560: 78.2688% ( 27) 00:12:00.964 13762.560 - 13822.138: 78.4701% ( 21) 00:12:00.964 13822.138 - 13881.716: 78.7385% ( 28) 00:12:00.964 13881.716 - 13941.295: 79.0357% ( 31) 00:12:00.964 13941.295 - 14000.873: 79.3999% ( 38) 00:12:00.964 14000.873 - 14060.451: 79.6108% ( 22) 00:12:00.964 14060.451 - 14120.029: 79.8505% ( 25) 00:12:00.964 14120.029 - 14179.607: 79.9942% ( 15) 00:12:00.964 14179.607 - 14239.185: 80.1189% ( 13) 00:12:00.964 14239.185 - 14298.764: 80.1956% ( 8) 00:12:00.964 14298.764 - 14358.342: 80.2435% ( 5) 00:12:00.964 14358.342 - 14417.920: 80.2818% ( 4) 00:12:00.964 14417.920 - 14477.498: 80.3106% ( 3) 00:12:00.964 14477.498 - 14537.076: 80.3298% ( 2) 00:12:00.964 14537.076 - 14596.655: 80.3585% ( 3) 00:12:00.964 14596.655 - 14656.233: 80.3681% ( 1) 00:12:00.964 14656.233 - 14715.811: 80.3777% ( 1) 00:12:00.964 14775.389 - 14834.967: 80.4160% ( 4) 00:12:00.964 14834.967 - 14894.545: 80.5023% ( 9) 00:12:00.964 14894.545 - 14954.124: 80.7228% ( 23) 00:12:00.964 14954.124 - 15013.702: 81.0583% ( 35) 00:12:00.964 15013.702 - 15073.280: 81.4225% ( 38) 00:12:00.964 15073.280 - 15132.858: 81.7197% ( 31) 00:12:00.964 15132.858 - 15192.436: 82.1127% ( 41) 00:12:00.964 15192.436 - 15252.015: 82.5729% ( 48) 00:12:00.964 15252.015 - 15371.171: 83.6273% ( 110) 00:12:00.964 15371.171 - 15490.327: 
84.7872% ( 121) 00:12:00.964 15490.327 - 15609.484: 86.4647% ( 175) 00:12:00.964 15609.484 - 15728.640: 88.2765% ( 189) 00:12:00.964 15728.640 - 15847.796: 90.2320% ( 204) 00:12:00.964 15847.796 - 15966.953: 91.8424% ( 168) 00:12:00.964 15966.953 - 16086.109: 93.3857% ( 161) 00:12:00.964 16086.109 - 16205.265: 94.6607% ( 133) 00:12:00.964 16205.265 - 16324.422: 95.8397% ( 123) 00:12:00.964 16324.422 - 16443.578: 96.8942% ( 110) 00:12:00.964 16443.578 - 16562.735: 97.5556% ( 69) 00:12:00.964 16562.735 - 16681.891: 98.0637% ( 53) 00:12:00.964 16681.891 - 16801.047: 98.3416% ( 29) 00:12:00.964 16801.047 - 16920.204: 98.5813% ( 25) 00:12:00.964 16920.204 - 17039.360: 98.7155% ( 14) 00:12:00.964 17039.360 - 17158.516: 98.7730% ( 6) 00:12:00.964 27048.495 - 27167.651: 98.7826% ( 1) 00:12:00.964 27167.651 - 27286.807: 98.8209% ( 4) 00:12:00.964 27286.807 - 27405.964: 98.8497% ( 3) 00:12:00.964 27405.964 - 27525.120: 98.8785% ( 3) 00:12:00.964 27525.120 - 27644.276: 98.9072% ( 3) 00:12:00.964 27644.276 - 27763.433: 98.9360% ( 3) 00:12:00.964 27763.433 - 27882.589: 98.9647% ( 3) 00:12:00.964 27882.589 - 28001.745: 98.9935% ( 3) 00:12:00.964 28001.745 - 28120.902: 99.0222% ( 3) 00:12:00.964 28120.902 - 28240.058: 99.0510% ( 3) 00:12:00.964 28240.058 - 28359.215: 99.0798% ( 3) 00:12:00.964 28359.215 - 28478.371: 99.1181% ( 4) 00:12:00.964 28478.371 - 28597.527: 99.1469% ( 3) 00:12:00.964 28597.527 - 28716.684: 99.1852% ( 4) 00:12:00.964 28716.684 - 28835.840: 99.2140% ( 3) 00:12:00.964 28835.840 - 28954.996: 99.2427% ( 3) 00:12:00.964 28954.996 - 29074.153: 99.2715% ( 3) 00:12:00.964 29074.153 - 29193.309: 99.3002% ( 3) 00:12:00.964 29193.309 - 29312.465: 99.3386% ( 4) 00:12:00.964 29312.465 - 29431.622: 99.3769% ( 4) 00:12:00.964 29431.622 - 29550.778: 99.3865% ( 1) 00:12:00.964 34793.658 - 35031.971: 99.4248% ( 4) 00:12:00.964 35031.971 - 35270.284: 99.4824% ( 6) 00:12:00.964 35270.284 - 35508.596: 99.5399% ( 6) 00:12:00.964 35508.596 - 35746.909: 99.6070% ( 7) 00:12:00.964 35746.909 - 35985.222: 99.6645% ( 6) 00:12:00.964 35985.222 - 36223.535: 99.7316% ( 7) 00:12:00.964 36223.535 - 36461.847: 99.7987% ( 7) 00:12:00.964 36461.847 - 36700.160: 99.8658% ( 7) 00:12:00.964 36700.160 - 36938.473: 99.9329% ( 7) 00:12:00.964 36938.473 - 37176.785: 100.0000% ( 7) 00:12:00.964 00:12:00.964 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:12:00.964 ============================================================================== 00:12:00.964 Range in us Cumulative IO count 00:12:00.964 8460.102 - 8519.680: 0.0096% ( 1) 00:12:00.964 8579.258 - 8638.836: 0.0192% ( 1) 00:12:00.964 8638.836 - 8698.415: 0.0575% ( 4) 00:12:00.964 8698.415 - 8757.993: 0.0959% ( 4) 00:12:00.964 8757.993 - 8817.571: 0.1630% ( 7) 00:12:00.964 8817.571 - 8877.149: 0.2301% ( 7) 00:12:00.964 8877.149 - 8936.727: 0.3930% ( 17) 00:12:00.964 8936.727 - 8996.305: 0.4505% ( 6) 00:12:00.964 8996.305 - 9055.884: 0.5081% ( 6) 00:12:00.964 9055.884 - 9115.462: 0.5656% ( 6) 00:12:00.964 9115.462 - 9175.040: 0.6327% ( 7) 00:12:00.964 9175.040 - 9234.618: 0.7477% ( 12) 00:12:00.964 9234.618 - 9294.196: 0.8531% ( 11) 00:12:00.964 9294.196 - 9353.775: 1.0161% ( 17) 00:12:00.964 9353.775 - 9413.353: 1.3516% ( 35) 00:12:00.964 9413.353 - 9472.931: 1.6775% ( 34) 00:12:00.964 9472.931 - 9532.509: 1.9651% ( 30) 00:12:00.964 9532.509 - 9592.087: 2.2431% ( 29) 00:12:00.964 9592.087 - 9651.665: 2.6169% ( 39) 00:12:00.964 9651.665 - 9711.244: 3.0483% ( 45) 00:12:00.964 9711.244 - 9770.822: 3.6906% ( 67) 00:12:00.964 9770.822 - 9830.400: 4.3903% ( 
73) 00:12:00.964 9830.400 - 9889.978: 5.3010% ( 95) 00:12:00.964 9889.978 - 9949.556: 6.1541% ( 89) 00:12:00.964 9949.556 - 10009.135: 7.0744% ( 96) 00:12:00.964 10009.135 - 10068.713: 8.2439% ( 122) 00:12:00.964 10068.713 - 10128.291: 9.2791% ( 108) 00:12:00.964 10128.291 - 10187.869: 10.5253% ( 130) 00:12:00.964 10187.869 - 10247.447: 11.7906% ( 132) 00:12:00.964 10247.447 - 10307.025: 13.7941% ( 209) 00:12:00.964 10307.025 - 10366.604: 16.0468% ( 235) 00:12:00.964 10366.604 - 10426.182: 18.7212% ( 279) 00:12:00.964 10426.182 - 10485.760: 21.8367% ( 325) 00:12:00.964 10485.760 - 10545.338: 25.2109% ( 352) 00:12:00.964 10545.338 - 10604.916: 28.3167% ( 324) 00:12:00.965 10604.916 - 10664.495: 31.4896% ( 331) 00:12:00.965 10664.495 - 10724.073: 34.2120% ( 284) 00:12:00.965 10724.073 - 10783.651: 36.8002% ( 270) 00:12:00.965 10783.651 - 10843.229: 39.0817% ( 238) 00:12:00.965 10843.229 - 10902.807: 41.3919% ( 241) 00:12:00.965 10902.807 - 10962.385: 43.0502% ( 173) 00:12:00.965 10962.385 - 11021.964: 44.5840% ( 160) 00:12:00.965 11021.964 - 11081.542: 46.0506% ( 153) 00:12:00.965 11081.542 - 11141.120: 47.2872% ( 129) 00:12:00.965 11141.120 - 11200.698: 48.6196% ( 139) 00:12:00.965 11200.698 - 11260.276: 50.1054% ( 155) 00:12:00.965 11260.276 - 11319.855: 51.0928% ( 103) 00:12:00.965 11319.855 - 11379.433: 52.1568% ( 111) 00:12:00.965 11379.433 - 11439.011: 53.3359% ( 123) 00:12:00.965 11439.011 - 11498.589: 54.2849% ( 99) 00:12:00.965 11498.589 - 11558.167: 55.1956% ( 95) 00:12:00.965 11558.167 - 11617.745: 55.9720% ( 81) 00:12:00.965 11617.745 - 11677.324: 56.5855% ( 64) 00:12:00.965 11677.324 - 11736.902: 57.1415% ( 58) 00:12:00.965 11736.902 - 11796.480: 57.5729% ( 45) 00:12:00.965 11796.480 - 11856.058: 58.0905% ( 54) 00:12:00.965 11856.058 - 11915.636: 58.5890% ( 52) 00:12:00.965 11915.636 - 11975.215: 59.2408% ( 68) 00:12:00.965 11975.215 - 12034.793: 59.8639% ( 65) 00:12:00.965 12034.793 - 12094.371: 60.4965% ( 66) 00:12:00.965 12094.371 - 12153.949: 61.1867% ( 72) 00:12:00.965 12153.949 - 12213.527: 62.3562% ( 122) 00:12:00.965 12213.527 - 12273.105: 63.1231% ( 80) 00:12:00.965 12273.105 - 12332.684: 64.1296% ( 105) 00:12:00.965 12332.684 - 12392.262: 65.0115% ( 92) 00:12:00.965 12392.262 - 12451.840: 65.8455% ( 87) 00:12:00.965 12451.840 - 12511.418: 66.6986% ( 89) 00:12:00.965 12511.418 - 12570.996: 67.6668% ( 101) 00:12:00.965 12570.996 - 12630.575: 68.7308% ( 111) 00:12:00.965 12630.575 - 12690.153: 69.8236% ( 114) 00:12:00.965 12690.153 - 12749.731: 70.7343% ( 95) 00:12:00.965 12749.731 - 12809.309: 71.6641% ( 97) 00:12:00.965 12809.309 - 12868.887: 72.4981% ( 87) 00:12:00.965 12868.887 - 12928.465: 73.2745% ( 81) 00:12:00.965 12928.465 - 12988.044: 74.0127% ( 77) 00:12:00.965 12988.044 - 13047.622: 74.7795% ( 80) 00:12:00.965 13047.622 - 13107.200: 75.3834% ( 63) 00:12:00.965 13107.200 - 13166.778: 75.8148% ( 45) 00:12:00.965 13166.778 - 13226.356: 76.1887% ( 39) 00:12:00.965 13226.356 - 13285.935: 76.5817% ( 41) 00:12:00.965 13285.935 - 13345.513: 77.0035% ( 44) 00:12:00.965 13345.513 - 13405.091: 77.3677% ( 38) 00:12:00.965 13405.091 - 13464.669: 77.6840% ( 33) 00:12:00.965 13464.669 - 13524.247: 77.9045% ( 23) 00:12:00.965 13524.247 - 13583.825: 78.0962% ( 20) 00:12:00.965 13583.825 - 13643.404: 78.2209% ( 13) 00:12:00.965 13643.404 - 13702.982: 78.3263% ( 11) 00:12:00.965 13702.982 - 13762.560: 78.4126% ( 9) 00:12:00.965 13762.560 - 13822.138: 78.4988% ( 9) 00:12:00.965 13822.138 - 13881.716: 78.5755% ( 8) 00:12:00.965 13881.716 - 13941.295: 78.6618% ( 9) 00:12:00.965 
13941.295 - 14000.873: 78.7385% ( 8) 00:12:00.965 14000.873 - 14060.451: 78.7768% ( 4) 00:12:00.965 14060.451 - 14120.029: 78.8248% ( 5) 00:12:00.965 14120.029 - 14179.607: 78.8823% ( 6) 00:12:00.965 14179.607 - 14239.185: 79.0740% ( 20) 00:12:00.965 14239.185 - 14298.764: 79.2178% ( 15) 00:12:00.965 14298.764 - 14358.342: 79.3999% ( 19) 00:12:00.965 14358.342 - 14417.920: 79.5821% ( 19) 00:12:00.965 14417.920 - 14477.498: 79.7738% ( 20) 00:12:00.965 14477.498 - 14537.076: 79.9655% ( 20) 00:12:00.965 14537.076 - 14596.655: 80.1668% ( 21) 00:12:00.965 14596.655 - 14656.233: 80.2914% ( 13) 00:12:00.965 14656.233 - 14715.811: 80.4640% ( 18) 00:12:00.965 14715.811 - 14775.389: 80.6365% ( 18) 00:12:00.965 14775.389 - 14834.967: 80.8570% ( 23) 00:12:00.965 14834.967 - 14894.545: 81.0008% ( 15) 00:12:00.965 14894.545 - 14954.124: 81.1446% ( 15) 00:12:00.965 14954.124 - 15013.702: 81.3650% ( 23) 00:12:00.965 15013.702 - 15073.280: 81.7485% ( 40) 00:12:00.965 15073.280 - 15132.858: 82.1031% ( 37) 00:12:00.965 15132.858 - 15192.436: 82.4482% ( 36) 00:12:00.965 15192.436 - 15252.015: 82.7837% ( 35) 00:12:00.965 15252.015 - 15371.171: 83.7327% ( 99) 00:12:00.965 15371.171 - 15490.327: 85.1802% ( 151) 00:12:00.965 15490.327 - 15609.484: 86.9728% ( 187) 00:12:00.965 15609.484 - 15728.640: 88.6503% ( 175) 00:12:00.965 15728.640 - 15847.796: 90.1649% ( 158) 00:12:00.965 15847.796 - 15966.953: 91.9574% ( 187) 00:12:00.965 15966.953 - 16086.109: 93.3282% ( 143) 00:12:00.965 16086.109 - 16205.265: 94.4018% ( 112) 00:12:00.965 16205.265 - 16324.422: 95.6001% ( 125) 00:12:00.965 16324.422 - 16443.578: 96.7408% ( 119) 00:12:00.965 16443.578 - 16562.735: 97.4693% ( 76) 00:12:00.965 16562.735 - 16681.891: 97.9774% ( 53) 00:12:00.965 16681.891 - 16801.047: 98.3033% ( 34) 00:12:00.965 16801.047 - 16920.204: 98.5813% ( 29) 00:12:00.965 16920.204 - 17039.360: 98.7347% ( 16) 00:12:00.965 17039.360 - 17158.516: 98.7730% ( 4) 00:12:00.965 26214.400 - 26333.556: 98.8689% ( 10) 00:12:00.965 26333.556 - 26452.713: 98.9839% ( 12) 00:12:00.965 26452.713 - 26571.869: 99.0127% ( 3) 00:12:00.965 26571.869 - 26691.025: 99.0414% ( 3) 00:12:00.965 26691.025 - 26810.182: 99.0702% ( 3) 00:12:00.965 26810.182 - 26929.338: 99.1085% ( 4) 00:12:00.965 26929.338 - 27048.495: 99.1373% ( 3) 00:12:00.965 27048.495 - 27167.651: 99.1660% ( 3) 00:12:00.965 27167.651 - 27286.807: 99.1948% ( 3) 00:12:00.965 27286.807 - 27405.964: 99.2235% ( 3) 00:12:00.965 27405.964 - 27525.120: 99.2523% ( 3) 00:12:00.965 27525.120 - 27644.276: 99.2906% ( 4) 00:12:00.965 27644.276 - 27763.433: 99.3194% ( 3) 00:12:00.965 27763.433 - 27882.589: 99.3482% ( 3) 00:12:00.965 27882.589 - 28001.745: 99.3769% ( 3) 00:12:00.965 28001.745 - 28120.902: 99.3865% ( 1) 00:12:00.965 31218.967 - 31457.280: 99.5207% ( 14) 00:12:00.965 31457.280 - 31695.593: 99.6549% ( 14) 00:12:00.965 31695.593 - 31933.905: 99.7891% ( 14) 00:12:00.965 33125.469 - 33363.782: 99.7987% ( 1) 00:12:00.965 33363.782 - 33602.095: 99.8562% ( 6) 00:12:00.965 33602.095 - 33840.407: 99.9041% ( 5) 00:12:00.965 33840.407 - 34078.720: 99.9712% ( 7) 00:12:00.965 34078.720 - 34317.033: 100.0000% ( 3) 00:12:00.965 00:12:00.965 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:12:00.965 ============================================================================== 00:12:00.965 Range in us Cumulative IO count 00:12:00.965 8281.367 - 8340.945: 0.0096% ( 1) 00:12:00.965 8519.680 - 8579.258: 0.0288% ( 2) 00:12:00.965 8579.258 - 8638.836: 0.0671% ( 4) 00:12:00.965 8638.836 - 8698.415: 0.1438% ( 8) 
00:12:00.965 8698.415 - 8757.993: 0.2109% ( 7) 00:12:00.965 8757.993 - 8817.571: 0.4218% ( 22) 00:12:00.965 8817.571 - 8877.149: 0.4889% ( 7) 00:12:00.965 8877.149 - 8936.727: 0.5368% ( 5) 00:12:00.965 8936.727 - 8996.305: 0.5847% ( 5) 00:12:00.965 8996.305 - 9055.884: 0.6135% ( 3) 00:12:00.965 9115.462 - 9175.040: 0.6710% ( 6) 00:12:00.965 9175.040 - 9234.618: 0.7477% ( 8) 00:12:00.965 9234.618 - 9294.196: 0.8627% ( 12) 00:12:00.965 9294.196 - 9353.775: 1.0736% ( 22) 00:12:00.965 9353.775 - 9413.353: 1.2078% ( 14) 00:12:00.965 9413.353 - 9472.931: 1.3995% ( 20) 00:12:00.965 9472.931 - 9532.509: 1.7734% ( 39) 00:12:00.965 9532.509 - 9592.087: 2.0035% ( 24) 00:12:00.966 9592.087 - 9651.665: 2.3102% ( 32) 00:12:00.966 9651.665 - 9711.244: 2.8374% ( 55) 00:12:00.966 9711.244 - 9770.822: 3.3071% ( 49) 00:12:00.966 9770.822 - 9830.400: 3.9206% ( 64) 00:12:00.966 9830.400 - 9889.978: 4.8984% ( 102) 00:12:00.966 9889.978 - 9949.556: 6.1925% ( 135) 00:12:00.966 9949.556 - 10009.135: 7.0360% ( 88) 00:12:00.966 10009.135 - 10068.713: 8.0234% ( 103) 00:12:00.966 10068.713 - 10128.291: 9.1929% ( 122) 00:12:00.966 10128.291 - 10187.869: 10.5253% ( 139) 00:12:00.966 10187.869 - 10247.447: 12.1453% ( 169) 00:12:00.966 10247.447 - 10307.025: 14.1967% ( 214) 00:12:00.966 10307.025 - 10366.604: 16.6794% ( 259) 00:12:00.966 10366.604 - 10426.182: 19.6415% ( 309) 00:12:00.966 10426.182 - 10485.760: 22.3351% ( 281) 00:12:00.966 10485.760 - 10545.338: 26.1024% ( 393) 00:12:00.966 10545.338 - 10604.916: 28.9110% ( 293) 00:12:00.966 10604.916 - 10664.495: 31.4321% ( 263) 00:12:00.966 10664.495 - 10724.073: 33.8669% ( 254) 00:12:00.966 10724.073 - 10783.651: 36.4839% ( 273) 00:12:00.966 10783.651 - 10843.229: 38.6407% ( 225) 00:12:00.966 10843.229 - 10902.807: 40.7880% ( 224) 00:12:00.966 10902.807 - 10962.385: 42.9064% ( 221) 00:12:00.966 10962.385 - 11021.964: 44.5648% ( 173) 00:12:00.966 11021.964 - 11081.542: 46.3382% ( 185) 00:12:00.966 11081.542 - 11141.120: 47.7665% ( 149) 00:12:00.966 11141.120 - 11200.698: 48.7251% ( 100) 00:12:00.966 11200.698 - 11260.276: 49.9904% ( 132) 00:12:00.966 11260.276 - 11319.855: 50.9394% ( 99) 00:12:00.966 11319.855 - 11379.433: 51.9076% ( 101) 00:12:00.966 11379.433 - 11439.011: 53.2017% ( 135) 00:12:00.966 11439.011 - 11498.589: 54.1028% ( 94) 00:12:00.966 11498.589 - 11558.167: 54.7929% ( 72) 00:12:00.966 11558.167 - 11617.745: 55.4352% ( 67) 00:12:00.966 11617.745 - 11677.324: 56.1829% ( 78) 00:12:00.966 11677.324 - 11736.902: 56.8923% ( 74) 00:12:00.966 11736.902 - 11796.480: 57.4099% ( 54) 00:12:00.966 11796.480 - 11856.058: 57.9371% ( 55) 00:12:00.966 11856.058 - 11915.636: 58.5410% ( 63) 00:12:00.966 11915.636 - 11975.215: 59.0203% ( 50) 00:12:00.966 11975.215 - 12034.793: 59.6530% ( 66) 00:12:00.966 12034.793 - 12094.371: 60.4870% ( 87) 00:12:00.966 12094.371 - 12153.949: 61.4743% ( 103) 00:12:00.966 12153.949 - 12213.527: 62.5767% ( 115) 00:12:00.966 12213.527 - 12273.105: 63.3531% ( 81) 00:12:00.966 12273.105 - 12332.684: 64.1008% ( 78) 00:12:00.966 12332.684 - 12392.262: 65.0403% ( 98) 00:12:00.966 12392.262 - 12451.840: 65.9509% ( 95) 00:12:00.966 12451.840 - 12511.418: 66.7657% ( 85) 00:12:00.966 12511.418 - 12570.996: 67.7243% ( 100) 00:12:00.966 12570.996 - 12630.575: 68.6925% ( 101) 00:12:00.966 12630.575 - 12690.153: 69.6894% ( 104) 00:12:00.966 12690.153 - 12749.731: 70.7726% ( 113) 00:12:00.966 12749.731 - 12809.309: 71.8079% ( 108) 00:12:00.966 12809.309 - 12868.887: 72.5268% ( 75) 00:12:00.966 12868.887 - 12928.465: 73.3992% ( 91) 00:12:00.966 12928.465 
- 12988.044: 74.4824% ( 113) 00:12:00.966 12988.044 - 13047.622: 75.3067% ( 86) 00:12:00.966 13047.622 - 13107.200: 75.8723% ( 59) 00:12:00.966 13107.200 - 13166.778: 76.2845% ( 43) 00:12:00.966 13166.778 - 13226.356: 76.5721% ( 30) 00:12:00.966 13226.356 - 13285.935: 76.7542% ( 19) 00:12:00.966 13285.935 - 13345.513: 76.9843% ( 24) 00:12:00.966 13345.513 - 13405.091: 77.2143% ( 24) 00:12:00.966 13405.091 - 13464.669: 77.4444% ( 24) 00:12:00.966 13464.669 - 13524.247: 77.6553% ( 22) 00:12:00.966 13524.247 - 13583.825: 77.8278% ( 18) 00:12:00.966 13583.825 - 13643.404: 78.0387% ( 22) 00:12:00.966 13643.404 - 13702.982: 78.1250% ( 9) 00:12:00.966 13702.982 - 13762.560: 78.2113% ( 9) 00:12:00.966 13762.560 - 13822.138: 78.2975% ( 9) 00:12:00.966 13822.138 - 13881.716: 78.3838% ( 9) 00:12:00.966 13881.716 - 13941.295: 78.4605% ( 8) 00:12:00.966 13941.295 - 14000.873: 78.4893% ( 3) 00:12:00.966 14000.873 - 14060.451: 78.5276% ( 4) 00:12:00.966 14060.451 - 14120.029: 78.6714% ( 15) 00:12:00.966 14120.029 - 14179.607: 78.7673% ( 10) 00:12:00.966 14179.607 - 14239.185: 78.8344% ( 7) 00:12:00.966 14239.185 - 14298.764: 78.9398% ( 11) 00:12:00.966 14298.764 - 14358.342: 79.0740% ( 14) 00:12:00.966 14358.342 - 14417.920: 79.1986% ( 13) 00:12:00.966 14417.920 - 14477.498: 79.3424% ( 15) 00:12:00.966 14477.498 - 14537.076: 79.6396% ( 31) 00:12:00.966 14537.076 - 14596.655: 79.8217% ( 19) 00:12:00.966 14596.655 - 14656.233: 79.9751% ( 16) 00:12:00.966 14656.233 - 14715.811: 80.0997% ( 13) 00:12:00.966 14715.811 - 14775.389: 80.2722% ( 18) 00:12:00.966 14775.389 - 14834.967: 80.3873% ( 12) 00:12:00.966 14834.967 - 14894.545: 80.5119% ( 13) 00:12:00.966 14894.545 - 14954.124: 80.6844% ( 18) 00:12:00.966 14954.124 - 15013.702: 80.8953% ( 22) 00:12:00.966 15013.702 - 15073.280: 81.0870% ( 20) 00:12:00.966 15073.280 - 15132.858: 81.5472% ( 48) 00:12:00.966 15132.858 - 15192.436: 82.1415% ( 62) 00:12:00.966 15192.436 - 15252.015: 82.7646% ( 65) 00:12:00.966 15252.015 - 15371.171: 84.2983% ( 160) 00:12:00.966 15371.171 - 15490.327: 85.8800% ( 165) 00:12:00.966 15490.327 - 15609.484: 87.2412% ( 142) 00:12:00.966 15609.484 - 15728.640: 88.7174% ( 154) 00:12:00.966 15728.640 - 15847.796: 90.1553% ( 150) 00:12:00.966 15847.796 - 15966.953: 91.8328% ( 175) 00:12:00.966 15966.953 - 16086.109: 93.1748% ( 140) 00:12:00.966 16086.109 - 16205.265: 94.7757% ( 167) 00:12:00.966 16205.265 - 16324.422: 95.8589% ( 113) 00:12:00.966 16324.422 - 16443.578: 96.5491% ( 72) 00:12:00.966 16443.578 - 16562.735: 97.3064% ( 79) 00:12:00.966 16562.735 - 16681.891: 97.7473% ( 46) 00:12:00.966 16681.891 - 16801.047: 98.1403% ( 41) 00:12:00.966 16801.047 - 16920.204: 98.4663% ( 34) 00:12:00.966 16920.204 - 17039.360: 98.6388% ( 18) 00:12:00.966 17039.360 - 17158.516: 98.7251% ( 9) 00:12:00.966 17158.516 - 17277.673: 98.7634% ( 4) 00:12:00.966 17277.673 - 17396.829: 98.7730% ( 1) 00:12:00.967 22282.240 - 22401.396: 98.8018% ( 3) 00:12:00.967 22401.396 - 22520.553: 98.8401% ( 4) 00:12:00.967 22520.553 - 22639.709: 98.8689% ( 3) 00:12:00.967 22639.709 - 22758.865: 98.8976% ( 3) 00:12:00.967 22758.865 - 22878.022: 98.9264% ( 3) 00:12:00.967 22878.022 - 22997.178: 98.9647% ( 4) 00:12:00.967 22997.178 - 23116.335: 98.9935% ( 3) 00:12:00.967 23116.335 - 23235.491: 99.0222% ( 3) 00:12:00.967 23235.491 - 23354.647: 99.0510% ( 3) 00:12:00.967 23354.647 - 23473.804: 99.0798% ( 3) 00:12:00.967 23473.804 - 23592.960: 99.1085% ( 3) 00:12:00.967 23592.960 - 23712.116: 99.1373% ( 3) 00:12:00.967 23712.116 - 23831.273: 99.1756% ( 4) 00:12:00.967 
23831.273 - 23950.429: 99.2044% ( 3) 00:12:00.967 23950.429 - 24069.585: 99.2427% ( 4) 00:12:00.967 24069.585 - 24188.742: 99.2619% ( 2) 00:12:00.967 24188.742 - 24307.898: 99.3002% ( 4) 00:12:00.967 24307.898 - 24427.055: 99.3290% ( 3) 00:12:00.967 24427.055 - 24546.211: 99.3673% ( 4) 00:12:00.967 24546.211 - 24665.367: 99.3865% ( 2) 00:12:00.967 30027.404 - 30146.560: 99.4153% ( 3) 00:12:00.967 30146.560 - 30265.716: 99.4536% ( 4) 00:12:00.967 30265.716 - 30384.873: 99.4824% ( 3) 00:12:00.967 30384.873 - 30504.029: 99.5111% ( 3) 00:12:00.967 30504.029 - 30742.342: 99.5782% ( 7) 00:12:00.967 30742.342 - 30980.655: 99.6357% ( 6) 00:12:00.967 30980.655 - 31218.967: 99.7028% ( 7) 00:12:00.967 31218.967 - 31457.280: 99.7699% ( 7) 00:12:00.967 31457.280 - 31695.593: 99.8370% ( 7) 00:12:00.967 31695.593 - 31933.905: 99.9137% ( 8) 00:12:00.967 31933.905 - 32172.218: 99.9808% ( 7) 00:12:00.967 32172.218 - 32410.531: 100.0000% ( 2) 00:12:00.967 00:12:00.967 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:12:00.967 ============================================================================== 00:12:00.967 Range in us Cumulative IO count 00:12:00.967 8519.680 - 8579.258: 0.0096% ( 1) 00:12:00.967 8579.258 - 8638.836: 0.0383% ( 3) 00:12:00.967 8638.836 - 8698.415: 0.0767% ( 4) 00:12:00.967 8698.415 - 8757.993: 0.1054% ( 3) 00:12:00.967 8757.993 - 8817.571: 0.4026% ( 31) 00:12:00.967 8817.571 - 8877.149: 0.4985% ( 10) 00:12:00.967 8877.149 - 8936.727: 0.5272% ( 3) 00:12:00.967 8936.727 - 8996.305: 0.5752% ( 5) 00:12:00.967 8996.305 - 9055.884: 0.6518% ( 8) 00:12:00.967 9055.884 - 9115.462: 0.7189% ( 7) 00:12:00.967 9115.462 - 9175.040: 0.7573% ( 4) 00:12:00.967 9175.040 - 9234.618: 0.8531% ( 10) 00:12:00.967 9234.618 - 9294.196: 1.1024% ( 26) 00:12:00.967 9294.196 - 9353.775: 1.1503% ( 5) 00:12:00.967 9353.775 - 9413.353: 1.2270% ( 8) 00:12:00.967 9413.353 - 9472.931: 1.3420% ( 12) 00:12:00.967 9472.931 - 9532.509: 1.4475% ( 11) 00:12:00.967 9532.509 - 9592.087: 1.6488% ( 21) 00:12:00.967 9592.087 - 9651.665: 2.3102% ( 69) 00:12:00.967 9651.665 - 9711.244: 2.6649% ( 37) 00:12:00.967 9711.244 - 9770.822: 3.0483% ( 40) 00:12:00.967 9770.822 - 9830.400: 3.7673% ( 75) 00:12:00.967 9830.400 - 9889.978: 4.8409% ( 112) 00:12:00.967 9889.978 - 9949.556: 6.0104% ( 122) 00:12:00.967 9949.556 - 10009.135: 7.1415% ( 118) 00:12:00.967 10009.135 - 10068.713: 8.5985% ( 152) 00:12:00.967 10068.713 - 10128.291: 9.7776% ( 123) 00:12:00.967 10128.291 - 10187.869: 11.1963% ( 148) 00:12:00.967 10187.869 - 10247.447: 12.7396% ( 161) 00:12:00.967 10247.447 - 10307.025: 14.4268% ( 176) 00:12:00.967 10307.025 - 10366.604: 16.9479% ( 263) 00:12:00.967 10366.604 - 10426.182: 19.2101% ( 236) 00:12:00.967 10426.182 - 10485.760: 21.9996% ( 291) 00:12:00.967 10485.760 - 10545.338: 25.1917% ( 333) 00:12:00.967 10545.338 - 10604.916: 28.6810% ( 364) 00:12:00.967 10604.916 - 10664.495: 31.4609% ( 290) 00:12:00.967 10664.495 - 10724.073: 34.3846% ( 305) 00:12:00.967 10724.073 - 10783.651: 37.1453% ( 288) 00:12:00.967 10783.651 - 10843.229: 39.0625% ( 200) 00:12:00.967 10843.229 - 10902.807: 41.2864% ( 232) 00:12:00.967 10902.807 - 10962.385: 43.0982% ( 189) 00:12:00.967 10962.385 - 11021.964: 44.5840% ( 155) 00:12:00.967 11021.964 - 11081.542: 45.8493% ( 132) 00:12:00.967 11081.542 - 11141.120: 47.2488% ( 146) 00:12:00.967 11141.120 - 11200.698: 48.6484% ( 146) 00:12:00.967 11200.698 - 11260.276: 49.9329% ( 134) 00:12:00.967 11260.276 - 11319.855: 50.9873% ( 110) 00:12:00.967 11319.855 - 11379.433: 51.9555% ( 101) 
00:12:00.967 11379.433 - 11439.011: 53.1250% ( 122) 00:12:00.967 11439.011 - 11498.589: 53.9590% ( 87) 00:12:00.967 11498.589 - 11558.167: 54.9271% ( 101) 00:12:00.967 11558.167 - 11617.745: 55.4735% ( 57) 00:12:00.967 11617.745 - 11677.324: 55.9433% ( 49) 00:12:00.967 11677.324 - 11736.902: 56.5855% ( 67) 00:12:00.967 11736.902 - 11796.480: 57.2853% ( 73) 00:12:00.967 11796.480 - 11856.058: 58.0617% ( 81) 00:12:00.967 11856.058 - 11915.636: 58.8669% ( 84) 00:12:00.967 11915.636 - 11975.215: 59.4133% ( 57) 00:12:00.967 11975.215 - 12034.793: 60.2952% ( 92) 00:12:00.967 12034.793 - 12094.371: 60.9471% ( 68) 00:12:00.967 12094.371 - 12153.949: 62.1837% ( 129) 00:12:00.967 12153.949 - 12213.527: 62.9889% ( 84) 00:12:00.967 12213.527 - 12273.105: 63.5640% ( 60) 00:12:00.967 12273.105 - 12332.684: 64.2926% ( 76) 00:12:00.967 12332.684 - 12392.262: 65.1745% ( 92) 00:12:00.967 12392.262 - 12451.840: 66.0180% ( 88) 00:12:00.967 12451.840 - 12511.418: 66.8999% ( 92) 00:12:00.967 12511.418 - 12570.996: 67.9735% ( 112) 00:12:00.967 12570.996 - 12630.575: 69.0663% ( 114) 00:12:00.967 12630.575 - 12690.153: 70.0249% ( 100) 00:12:00.967 12690.153 - 12749.731: 70.8781% ( 89) 00:12:00.967 12749.731 - 12809.309: 71.7312% ( 89) 00:12:00.967 12809.309 - 12868.887: 72.4310% ( 73) 00:12:00.967 12868.887 - 12928.465: 73.1499% ( 75) 00:12:00.967 12928.465 - 12988.044: 73.8018% ( 68) 00:12:00.967 12988.044 - 13047.622: 74.7508% ( 99) 00:12:00.967 13047.622 - 13107.200: 75.4410% ( 72) 00:12:00.967 13107.200 - 13166.778: 75.9011% ( 48) 00:12:00.967 13166.778 - 13226.356: 76.3516% ( 47) 00:12:00.967 13226.356 - 13285.935: 76.8117% ( 48) 00:12:00.967 13285.935 - 13345.513: 77.1952% ( 40) 00:12:00.967 13345.513 - 13405.091: 77.4636% ( 28) 00:12:00.967 13405.091 - 13464.669: 77.6745% ( 22) 00:12:00.967 13464.669 - 13524.247: 77.8662% ( 20) 00:12:00.967 13524.247 - 13583.825: 77.9908% ( 13) 00:12:00.967 13583.825 - 13643.404: 78.1442% ( 16) 00:12:00.967 13643.404 - 13702.982: 78.2975% ( 16) 00:12:00.967 13702.982 - 13762.560: 78.4030% ( 11) 00:12:00.967 13762.560 - 13822.138: 78.4988% ( 10) 00:12:00.967 13822.138 - 13881.716: 78.5660% ( 7) 00:12:00.967 13881.716 - 13941.295: 78.6235% ( 6) 00:12:00.967 13941.295 - 14000.873: 78.6714% ( 5) 00:12:00.967 14000.873 - 14060.451: 78.7002% ( 3) 00:12:00.967 14060.451 - 14120.029: 78.7289% ( 3) 00:12:00.967 14120.029 - 14179.607: 78.7577% ( 3) 00:12:00.967 14179.607 - 14239.185: 78.7960% ( 4) 00:12:00.967 14239.185 - 14298.764: 78.8248% ( 3) 00:12:00.967 14298.764 - 14358.342: 78.8631% ( 4) 00:12:00.967 14358.342 - 14417.920: 78.9015% ( 4) 00:12:00.967 14417.920 - 14477.498: 78.9206% ( 2) 00:12:00.967 14477.498 - 14537.076: 78.9494% ( 3) 00:12:00.967 14537.076 - 14596.655: 78.9877% ( 4) 00:12:00.968 14596.655 - 14656.233: 79.0644% ( 8) 00:12:00.968 14656.233 - 14715.811: 79.1219% ( 6) 00:12:00.968 14715.811 - 14775.389: 79.2657% ( 15) 00:12:00.968 14775.389 - 14834.967: 79.4574% ( 20) 00:12:00.968 14834.967 - 14894.545: 79.6971% ( 25) 00:12:00.968 14894.545 - 14954.124: 80.0326% ( 35) 00:12:00.968 14954.124 - 15013.702: 80.5406% ( 53) 00:12:00.968 15013.702 - 15073.280: 81.0487% ( 53) 00:12:00.968 15073.280 - 15132.858: 81.7197% ( 70) 00:12:00.968 15132.858 - 15192.436: 82.2853% ( 59) 00:12:00.968 15192.436 - 15252.015: 82.8796% ( 62) 00:12:00.968 15252.015 - 15371.171: 84.2408% ( 142) 00:12:00.968 15371.171 - 15490.327: 85.9663% ( 180) 00:12:00.968 15490.327 - 15609.484: 87.6150% ( 172) 00:12:00.968 15609.484 - 15728.640: 89.1679% ( 162) 00:12:00.968 15728.640 - 15847.796: 
90.4908% ( 138) 00:12:00.968 15847.796 - 15966.953: 91.8616% ( 143) 00:12:00.968 15966.953 - 16086.109: 93.2803% ( 148) 00:12:00.968 16086.109 - 16205.265: 94.1047% ( 86) 00:12:00.968 16205.265 - 16324.422: 94.8524% ( 78) 00:12:00.968 16324.422 - 16443.578: 95.7630% ( 95) 00:12:00.968 16443.578 - 16562.735: 96.4628% ( 73) 00:12:00.968 16562.735 - 16681.891: 97.3926% ( 97) 00:12:00.968 16681.891 - 16801.047: 97.8623% ( 49) 00:12:00.968 16801.047 - 16920.204: 98.2170% ( 37) 00:12:00.968 16920.204 - 17039.360: 98.4758% ( 27) 00:12:00.968 17039.360 - 17158.516: 98.6292% ( 16) 00:12:00.968 17158.516 - 17277.673: 98.6771% ( 5) 00:12:00.968 17277.673 - 17396.829: 98.7251% ( 5) 00:12:00.968 17396.829 - 17515.985: 98.7730% ( 5) 00:12:00.968 21209.833 - 21328.989: 98.8880% ( 12) 00:12:00.968 21328.989 - 21448.145: 99.0222% ( 14) 00:12:00.968 21448.145 - 21567.302: 99.0510% ( 3) 00:12:00.968 21567.302 - 21686.458: 99.0798% ( 3) 00:12:00.968 21686.458 - 21805.615: 99.1085% ( 3) 00:12:00.968 21805.615 - 21924.771: 99.1373% ( 3) 00:12:00.968 21924.771 - 22043.927: 99.1660% ( 3) 00:12:00.968 22043.927 - 22163.084: 99.1948% ( 3) 00:12:00.968 22163.084 - 22282.240: 99.2235% ( 3) 00:12:00.968 22282.240 - 22401.396: 99.2523% ( 3) 00:12:00.968 22401.396 - 22520.553: 99.2811% ( 3) 00:12:00.968 22520.553 - 22639.709: 99.3002% ( 2) 00:12:00.968 22639.709 - 22758.865: 99.3290% ( 3) 00:12:00.968 22758.865 - 22878.022: 99.3482% ( 2) 00:12:00.968 22878.022 - 22997.178: 99.3769% ( 3) 00:12:00.968 22997.178 - 23116.335: 99.3865% ( 1) 00:12:00.968 26571.869 - 26691.025: 99.4536% ( 7) 00:12:00.968 26691.025 - 26810.182: 99.5207% ( 7) 00:12:00.968 26810.182 - 26929.338: 99.7220% ( 21) 00:12:00.968 28359.215 - 28478.371: 99.7412% ( 2) 00:12:00.968 28478.371 - 28597.527: 99.7604% ( 2) 00:12:00.968 28597.527 - 28716.684: 99.7891% ( 3) 00:12:00.968 28716.684 - 28835.840: 99.8179% ( 3) 00:12:00.968 28835.840 - 28954.996: 99.8370% ( 2) 00:12:00.968 28954.996 - 29074.153: 99.8754% ( 4) 00:12:00.968 29074.153 - 29193.309: 99.9041% ( 3) 00:12:00.968 29193.309 - 29312.465: 99.9329% ( 3) 00:12:00.968 29312.465 - 29431.622: 99.9617% ( 3) 00:12:00.968 29431.622 - 29550.778: 99.9904% ( 3) 00:12:00.968 29550.778 - 29669.935: 100.0000% ( 1) 00:12:00.968 00:12:00.968 09:08:55 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:12:00.968 00:12:00.968 real 0m2.726s 00:12:00.968 user 0m2.298s 00:12:00.968 sys 0m0.314s 00:12:00.968 09:08:55 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.968 09:08:55 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:12:00.968 ************************************ 00:12:00.968 END TEST nvme_perf 00:12:00.968 ************************************ 00:12:00.968 09:08:55 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:00.968 09:08:55 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:00.968 09:08:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.968 09:08:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:00.968 ************************************ 00:12:00.968 START TEST nvme_hello_world 00:12:00.968 ************************************ 00:12:00.968 09:08:55 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:01.226 Initializing NVMe Controllers 00:12:01.226 Attached to 0000:00:10.0 00:12:01.226 Namespace ID: 1 size: 6GB 00:12:01.226 Attached to 0000:00:11.0 00:12:01.226 
Namespace ID: 1 size: 5GB 00:12:01.226 Attached to 0000:00:13.0 00:12:01.226 Namespace ID: 1 size: 1GB 00:12:01.226 Attached to 0000:00:12.0 00:12:01.226 Namespace ID: 1 size: 4GB 00:12:01.226 Namespace ID: 2 size: 4GB 00:12:01.226 Namespace ID: 3 size: 4GB 00:12:01.226 Initialization complete. 00:12:01.226 INFO: using host memory buffer for IO 00:12:01.226 Hello world! 00:12:01.226 INFO: using host memory buffer for IO 00:12:01.226 Hello world! 00:12:01.226 INFO: using host memory buffer for IO 00:12:01.226 Hello world! 00:12:01.226 INFO: using host memory buffer for IO 00:12:01.226 Hello world! 00:12:01.226 INFO: using host memory buffer for IO 00:12:01.226 Hello world! 00:12:01.226 INFO: using host memory buffer for IO 00:12:01.226 Hello world! 00:12:01.226 00:12:01.226 real 0m0.355s 00:12:01.226 user 0m0.143s 00:12:01.226 sys 0m0.162s 00:12:01.226 09:08:56 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.226 09:08:56 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:01.226 ************************************ 00:12:01.226 END TEST nvme_hello_world 00:12:01.226 ************************************ 00:12:01.226 09:08:56 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:01.226 09:08:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.226 09:08:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.226 09:08:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.226 ************************************ 00:12:01.226 START TEST nvme_sgl 00:12:01.226 ************************************ 00:12:01.226 09:08:56 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:01.485 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:12:01.485 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:12:01.485 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:12:01.743 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:12:01.743 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:12:01.743 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:12:01.743 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:12:01.744 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:12:01.744 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:12:01.744 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:12:01.744 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:12:01.744 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:12:01.744 0000:00:13.0: build_io_request_11 Invalid IO length 
parameter 00:12:01.744 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:12:01.744 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:12:01.744 NVMe Readv/Writev Request test 00:12:01.744 Attached to 0000:00:10.0 00:12:01.744 Attached to 0000:00:11.0 00:12:01.744 Attached to 0000:00:13.0 00:12:01.744 Attached to 0000:00:12.0 00:12:01.744 0000:00:10.0: build_io_request_2 test passed 00:12:01.744 0000:00:10.0: build_io_request_4 test passed 00:12:01.744 0000:00:10.0: build_io_request_5 test passed 00:12:01.744 0000:00:10.0: build_io_request_6 test passed 00:12:01.744 0000:00:10.0: build_io_request_7 test passed 00:12:01.744 0000:00:10.0: build_io_request_10 test passed 00:12:01.744 0000:00:11.0: build_io_request_2 test passed 00:12:01.744 0000:00:11.0: build_io_request_4 test passed 00:12:01.744 0000:00:11.0: build_io_request_5 test passed 00:12:01.744 0000:00:11.0: build_io_request_6 test passed 00:12:01.744 0000:00:11.0: build_io_request_7 test passed 00:12:01.744 0000:00:11.0: build_io_request_10 test passed 00:12:01.744 Cleaning up... 00:12:01.744 00:12:01.744 real 0m0.442s 00:12:01.744 user 0m0.215s 00:12:01.744 sys 0m0.176s 00:12:01.744 09:08:56 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.744 ************************************ 00:12:01.744 END TEST nvme_sgl 00:12:01.744 ************************************ 00:12:01.744 09:08:56 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:12:01.744 09:08:56 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:01.744 09:08:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.744 09:08:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.744 09:08:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.744 ************************************ 00:12:01.744 START TEST nvme_e2edp 00:12:01.744 ************************************ 00:12:01.744 09:08:56 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:02.003 NVMe Write/Read with End-to-End data protection test 00:12:02.003 Attached to 0000:00:10.0 00:12:02.003 Attached to 0000:00:11.0 00:12:02.003 Attached to 0000:00:13.0 00:12:02.003 Attached to 0000:00:12.0 00:12:02.003 Cleaning up... 
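Each of the feature tests in this stretch follows the same pattern: nvme.sh hands a compiled test binary to run_test, which prints the START/END banners and the real/user/sys timing seen above. A minimal sketch of replaying the last two units by hand from the repo root (paths taken from the banners; assumes scripts/setup.sh has already reserved hugepages and bound the controllers to a userspace driver):

# scatter-gather list handling; prints one line per build_io_request_* case
sudo ./test/nvme/sgl/sgl
# end-to-end data protection write/read pass against the same controllers
sudo ./test/nvme/e2edp/nvme_dp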
00:12:02.262 00:12:02.262 real 0m0.352s 00:12:02.262 user 0m0.135s 00:12:02.262 sys 0m0.165s 00:12:02.262 09:08:57 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.262 ************************************ 00:12:02.262 END TEST nvme_e2edp 00:12:02.262 09:08:57 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:12:02.262 ************************************ 00:12:02.262 09:08:57 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:02.262 09:08:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:02.262 09:08:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.262 09:08:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:02.262 ************************************ 00:12:02.262 START TEST nvme_reserve 00:12:02.262 ************************************ 00:12:02.262 09:08:57 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:02.520 ===================================================== 00:12:02.520 NVMe Controller at PCI bus 0, device 16, function 0 00:12:02.520 ===================================================== 00:12:02.520 Reservations: Not Supported 00:12:02.520 ===================================================== 00:12:02.520 NVMe Controller at PCI bus 0, device 17, function 0 00:12:02.520 ===================================================== 00:12:02.520 Reservations: Not Supported 00:12:02.520 ===================================================== 00:12:02.520 NVMe Controller at PCI bus 0, device 19, function 0 00:12:02.520 ===================================================== 00:12:02.520 Reservations: Not Supported 00:12:02.520 ===================================================== 00:12:02.520 NVMe Controller at PCI bus 0, device 18, function 0 00:12:02.520 ===================================================== 00:12:02.520 Reservations: Not Supported 00:12:02.520 Reservation test passed 00:12:02.520 ************************************ 00:12:02.520 END TEST nvme_reserve 00:12:02.520 ************************************ 00:12:02.520 00:12:02.520 real 0m0.349s 00:12:02.520 user 0m0.142s 00:12:02.520 sys 0m0.156s 00:12:02.520 09:08:57 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.520 09:08:57 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:12:02.520 09:08:57 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:02.520 09:08:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:02.520 09:08:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.520 09:08:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:02.520 ************************************ 00:12:02.520 START TEST nvme_err_injection 00:12:02.520 ************************************ 00:12:02.520 09:08:57 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:03.116 NVMe Error Injection test 00:12:03.116 Attached to 0000:00:10.0 00:12:03.116 Attached to 0000:00:11.0 00:12:03.116 Attached to 0000:00:13.0 00:12:03.116 Attached to 0000:00:12.0 00:12:03.116 0000:00:13.0: get features failed as expected 00:12:03.116 0000:00:12.0: get features failed as expected 00:12:03.116 0000:00:10.0: get features failed as expected 00:12:03.116 0000:00:11.0: get features failed as expected 00:12:03.117 
0000:00:10.0: get features successfully as expected 00:12:03.117 0000:00:11.0: get features successfully as expected 00:12:03.117 0000:00:13.0: get features successfully as expected 00:12:03.117 0000:00:12.0: get features successfully as expected 00:12:03.117 0000:00:10.0: read failed as expected 00:12:03.117 0000:00:11.0: read failed as expected 00:12:03.117 0000:00:13.0: read failed as expected 00:12:03.117 0000:00:12.0: read failed as expected 00:12:03.117 0000:00:10.0: read successfully as expected 00:12:03.117 0000:00:11.0: read successfully as expected 00:12:03.117 0000:00:13.0: read successfully as expected 00:12:03.117 0000:00:12.0: read successfully as expected 00:12:03.117 Cleaning up... 00:12:03.117 ************************************ 00:12:03.117 END TEST nvme_err_injection 00:12:03.117 ************************************ 00:12:03.117 00:12:03.117 real 0m0.357s 00:12:03.117 user 0m0.151s 00:12:03.117 sys 0m0.157s 00:12:03.117 09:08:57 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.117 09:08:57 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:12:03.117 09:08:57 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:03.117 09:08:57 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:12:03.117 09:08:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.117 09:08:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:03.117 ************************************ 00:12:03.117 START TEST nvme_overhead 00:12:03.117 ************************************ 00:12:03.117 09:08:57 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:04.494 Initializing NVMe Controllers 00:12:04.494 Attached to 0000:00:10.0 00:12:04.494 Attached to 0000:00:11.0 00:12:04.494 Attached to 0000:00:13.0 00:12:04.494 Attached to 0000:00:12.0 00:12:04.494 Initialization complete. Launching workers. 
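The overhead test starting here measures per-I/O software overhead rather than device latency. A sketch of the direct invocation, with flag meanings inferred from the run_test line and the output that follows (-o I/O size in bytes, -t runtime in seconds, -H print the submit/complete histograms, -i shared-memory instance id):

# 4 KiB reads for 1 second, with per-bucket submit/complete latency histograms
sudo ./test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0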
00:12:04.494 submit (in ns) avg, min, max = 16435.9, 11532.7, 161572.7 00:12:04.494 complete (in ns) avg, min, max = 12131.3, 8058.2, 285432.7 00:12:04.494 00:12:04.494 Submit histogram 00:12:04.494 ================ 00:12:04.494 Range in us Cumulative Count 00:12:04.494 11.520 - 11.578: 0.0528% ( 4) 00:12:04.495 11.578 - 11.636: 0.1057% ( 4) 00:12:04.495 11.636 - 11.695: 0.1982% ( 7) 00:12:04.495 11.695 - 11.753: 0.4492% ( 19) 00:12:04.495 11.753 - 11.811: 0.6606% ( 16) 00:12:04.495 11.811 - 11.869: 1.0041% ( 26) 00:12:04.495 11.869 - 11.927: 1.4797% ( 36) 00:12:04.495 11.927 - 11.985: 1.9818% ( 38) 00:12:04.495 11.985 - 12.044: 2.5895% ( 46) 00:12:04.495 12.044 - 12.102: 3.5540% ( 73) 00:12:04.495 12.102 - 12.160: 4.4127% ( 65) 00:12:04.495 12.160 - 12.218: 5.0205% ( 46) 00:12:04.495 12.218 - 12.276: 5.4961% ( 36) 00:12:04.495 12.276 - 12.335: 5.9453% ( 34) 00:12:04.495 12.335 - 12.393: 6.6059% ( 50) 00:12:04.495 12.393 - 12.451: 7.2401% ( 48) 00:12:04.495 12.451 - 12.509: 7.6364% ( 30) 00:12:04.495 12.509 - 12.567: 8.0460% ( 31) 00:12:04.495 12.567 - 12.625: 8.3498% ( 23) 00:12:04.495 12.625 - 12.684: 8.5744% ( 17) 00:12:04.495 12.684 - 12.742: 8.7330% ( 12) 00:12:04.495 12.742 - 12.800: 8.9708% ( 18) 00:12:04.495 12.800 - 12.858: 9.1690% ( 15) 00:12:04.495 12.858 - 12.916: 9.4993% ( 25) 00:12:04.495 12.916 - 12.975: 9.8692% ( 28) 00:12:04.495 12.975 - 13.033: 10.3845% ( 39) 00:12:04.495 13.033 - 13.091: 11.0186% ( 48) 00:12:04.495 13.091 - 13.149: 12.0756% ( 80) 00:12:04.495 13.149 - 13.207: 13.1457% ( 81) 00:12:04.495 13.207 - 13.265: 14.2291% ( 82) 00:12:04.495 13.265 - 13.324: 15.6824% ( 110) 00:12:04.495 13.324 - 13.382: 17.3339% ( 125) 00:12:04.495 13.382 - 13.440: 19.4477% ( 160) 00:12:04.495 13.440 - 13.498: 22.0637% ( 198) 00:12:04.495 13.498 - 13.556: 24.2172% ( 163) 00:12:04.495 13.556 - 13.615: 26.5821% ( 179) 00:12:04.495 13.615 - 13.673: 28.6960% ( 160) 00:12:04.495 13.673 - 13.731: 31.1270% ( 184) 00:12:04.495 13.731 - 13.789: 33.1219% ( 151) 00:12:04.495 13.789 - 13.847: 35.3944% ( 172) 00:12:04.495 13.847 - 13.905: 37.2440% ( 140) 00:12:04.495 13.905 - 13.964: 39.4372% ( 166) 00:12:04.495 13.964 - 14.022: 40.8905% ( 110) 00:12:04.495 14.022 - 14.080: 42.1852% ( 98) 00:12:04.495 14.080 - 14.138: 43.4668% ( 97) 00:12:04.495 14.138 - 14.196: 44.6823% ( 92) 00:12:04.495 14.196 - 14.255: 46.0827% ( 106) 00:12:04.495 14.255 - 14.313: 47.0868% ( 76) 00:12:04.495 14.313 - 14.371: 47.9059% ( 62) 00:12:04.495 14.371 - 14.429: 49.0289% ( 85) 00:12:04.495 14.429 - 14.487: 49.9670% ( 71) 00:12:04.495 14.487 - 14.545: 51.0503% ( 82) 00:12:04.495 14.545 - 14.604: 52.2658% ( 92) 00:12:04.495 14.604 - 14.662: 53.4417% ( 89) 00:12:04.495 14.662 - 14.720: 54.5118% ( 81) 00:12:04.495 14.720 - 14.778: 55.3838% ( 66) 00:12:04.495 14.778 - 14.836: 56.3483% ( 73) 00:12:04.495 14.836 - 14.895: 56.9956% ( 49) 00:12:04.495 14.895 - 15.011: 58.0394% ( 79) 00:12:04.495 15.011 - 15.127: 58.7396% ( 53) 00:12:04.495 15.127 - 15.244: 59.3341% ( 45) 00:12:04.495 15.244 - 15.360: 59.7305% ( 30) 00:12:04.495 15.360 - 15.476: 60.0608% ( 25) 00:12:04.495 15.476 - 15.593: 60.4307% ( 28) 00:12:04.495 15.593 - 15.709: 60.7082% ( 21) 00:12:04.495 15.709 - 15.825: 60.9460% ( 18) 00:12:04.495 15.825 - 15.942: 61.1838% ( 18) 00:12:04.495 15.942 - 16.058: 61.3820% ( 15) 00:12:04.495 16.058 - 16.175: 62.9277% ( 117) 00:12:04.495 16.175 - 16.291: 67.8689% ( 374) 00:12:04.495 16.291 - 16.407: 73.8935% ( 456) 00:12:04.495 16.407 - 16.524: 76.6680% ( 210) 00:12:04.495 16.524 - 16.640: 78.3723% ( 129) 00:12:04.495 16.640 
- 16.756: 80.0502% ( 127) 00:12:04.495 16.756 - 16.873: 81.3978% ( 102) 00:12:04.495 16.873 - 16.989: 82.2830% ( 67) 00:12:04.495 16.989 - 17.105: 82.7190% ( 33) 00:12:04.495 17.105 - 17.222: 83.1021% ( 29) 00:12:04.495 17.222 - 17.338: 83.4060% ( 23) 00:12:04.495 17.338 - 17.455: 83.6306% ( 17) 00:12:04.495 17.455 - 17.571: 83.8552% ( 17) 00:12:04.495 17.571 - 17.687: 84.0534% ( 15) 00:12:04.495 17.687 - 17.804: 84.1987% ( 11) 00:12:04.495 17.804 - 17.920: 84.3176% ( 9) 00:12:04.495 17.920 - 18.036: 84.4101% ( 7) 00:12:04.495 18.036 - 18.153: 84.5422% ( 10) 00:12:04.495 18.153 - 18.269: 84.6215% ( 6) 00:12:04.495 18.269 - 18.385: 84.7272% ( 8) 00:12:04.495 18.385 - 18.502: 84.8064% ( 6) 00:12:04.495 18.502 - 18.618: 84.8461% ( 3) 00:12:04.495 18.618 - 18.735: 85.0178% ( 13) 00:12:04.495 18.735 - 18.851: 85.1367% ( 9) 00:12:04.495 18.851 - 18.967: 85.2292% ( 7) 00:12:04.495 18.967 - 19.084: 85.3481% ( 9) 00:12:04.495 19.084 - 19.200: 85.4538% ( 8) 00:12:04.495 19.200 - 19.316: 85.5595% ( 8) 00:12:04.495 19.316 - 19.433: 85.6520% ( 7) 00:12:04.495 19.433 - 19.549: 85.8105% ( 12) 00:12:04.495 19.549 - 19.665: 85.9162% ( 8) 00:12:04.495 19.665 - 19.782: 86.0484% ( 10) 00:12:04.495 19.782 - 19.898: 86.2069% ( 12) 00:12:04.495 19.898 - 20.015: 86.2862% ( 6) 00:12:04.495 20.015 - 20.131: 86.3786% ( 7) 00:12:04.495 20.131 - 20.247: 86.5240% ( 11) 00:12:04.495 20.247 - 20.364: 86.6561% ( 10) 00:12:04.495 20.364 - 20.480: 86.7750% ( 9) 00:12:04.495 20.480 - 20.596: 86.8411% ( 5) 00:12:04.495 20.596 - 20.713: 86.9468% ( 8) 00:12:04.495 20.713 - 20.829: 87.0392% ( 7) 00:12:04.495 20.829 - 20.945: 87.1846% ( 11) 00:12:04.495 20.945 - 21.062: 87.3167% ( 10) 00:12:04.495 21.062 - 21.178: 87.3827% ( 5) 00:12:04.495 21.178 - 21.295: 87.4620% ( 6) 00:12:04.495 21.295 - 21.411: 87.5149% ( 4) 00:12:04.495 21.411 - 21.527: 87.6734% ( 12) 00:12:04.495 21.527 - 21.644: 87.8055% ( 10) 00:12:04.495 21.644 - 21.760: 87.8716% ( 5) 00:12:04.495 21.760 - 21.876: 87.9509% ( 6) 00:12:04.495 21.876 - 21.993: 88.0301% ( 6) 00:12:04.495 21.993 - 22.109: 88.2019% ( 13) 00:12:04.495 22.109 - 22.225: 88.2811% ( 6) 00:12:04.495 22.225 - 22.342: 88.3604% ( 6) 00:12:04.495 22.342 - 22.458: 88.4265% ( 5) 00:12:04.495 22.458 - 22.575: 88.4793% ( 4) 00:12:04.495 22.575 - 22.691: 88.5586% ( 6) 00:12:04.495 22.691 - 22.807: 88.7568% ( 15) 00:12:04.495 22.807 - 22.924: 88.8757% ( 9) 00:12:04.495 22.924 - 23.040: 88.9285% ( 4) 00:12:04.495 23.040 - 23.156: 89.0606% ( 10) 00:12:04.495 23.156 - 23.273: 89.1531% ( 7) 00:12:04.495 23.273 - 23.389: 89.2720% ( 9) 00:12:04.495 23.389 - 23.505: 89.3117% ( 3) 00:12:04.495 23.505 - 23.622: 89.3777% ( 5) 00:12:04.495 23.622 - 23.738: 89.5098% ( 10) 00:12:04.495 23.738 - 23.855: 89.6684% ( 12) 00:12:04.495 23.855 - 23.971: 89.7741% ( 8) 00:12:04.495 23.971 - 24.087: 89.9062% ( 10) 00:12:04.495 24.087 - 24.204: 90.0251% ( 9) 00:12:04.495 24.204 - 24.320: 90.0779% ( 4) 00:12:04.495 24.320 - 24.436: 90.0912% ( 1) 00:12:04.495 24.436 - 24.553: 90.1308% ( 3) 00:12:04.495 24.553 - 24.669: 90.1572% ( 2) 00:12:04.495 24.669 - 24.785: 90.2365% ( 6) 00:12:04.495 24.785 - 24.902: 90.3025% ( 5) 00:12:04.495 24.902 - 25.018: 90.3686% ( 5) 00:12:04.495 25.018 - 25.135: 90.4479% ( 6) 00:12:04.495 25.135 - 25.251: 90.5139% ( 5) 00:12:04.495 25.251 - 25.367: 90.5668% ( 4) 00:12:04.495 25.367 - 25.484: 90.6064% ( 3) 00:12:04.495 25.484 - 25.600: 90.6461% ( 3) 00:12:04.495 25.600 - 25.716: 90.7121% ( 5) 00:12:04.495 25.716 - 25.833: 90.7650% ( 4) 00:12:04.495 25.833 - 25.949: 90.8046% ( 3) 00:12:04.495 25.949 - 
26.065: 90.8707% ( 5) 00:12:04.495 26.065 - 26.182: 90.9764% ( 8) 00:12:04.495 26.182 - 26.298: 91.0688% ( 7) 00:12:04.495 26.298 - 26.415: 91.1481% ( 6) 00:12:04.495 26.415 - 26.531: 91.2274% ( 6) 00:12:04.495 26.531 - 26.647: 91.2934% ( 5) 00:12:04.495 26.647 - 26.764: 91.3727% ( 6) 00:12:04.495 26.764 - 26.880: 91.5048% ( 10) 00:12:04.495 26.880 - 26.996: 91.6237% ( 9) 00:12:04.495 26.996 - 27.113: 91.8219% ( 15) 00:12:04.495 27.113 - 27.229: 91.9672% ( 11) 00:12:04.495 27.229 - 27.345: 92.0994% ( 10) 00:12:04.495 27.345 - 27.462: 92.2843% ( 14) 00:12:04.495 27.462 - 27.578: 92.4825% ( 15) 00:12:04.495 27.578 - 27.695: 92.6675% ( 14) 00:12:04.495 27.695 - 27.811: 92.7996% ( 10) 00:12:04.495 27.811 - 27.927: 92.9185% ( 9) 00:12:04.495 27.927 - 28.044: 93.1167% ( 15) 00:12:04.495 28.044 - 28.160: 93.2884% ( 13) 00:12:04.495 28.160 - 28.276: 93.6187% ( 25) 00:12:04.495 28.276 - 28.393: 93.8433% ( 17) 00:12:04.495 28.393 - 28.509: 94.1208% ( 21) 00:12:04.495 28.509 - 28.625: 94.5832% ( 35) 00:12:04.495 28.625 - 28.742: 95.0324% ( 34) 00:12:04.495 28.742 - 28.858: 95.5212% ( 37) 00:12:04.495 28.858 - 28.975: 96.1157% ( 45) 00:12:04.495 28.975 - 29.091: 96.5385% ( 32) 00:12:04.495 29.091 - 29.207: 96.8160% ( 21) 00:12:04.495 29.207 - 29.324: 97.1066% ( 22) 00:12:04.496 29.324 - 29.440: 97.3709% ( 20) 00:12:04.496 29.440 - 29.556: 97.5822% ( 16) 00:12:04.496 29.556 - 29.673: 97.7936% ( 16) 00:12:04.496 29.673 - 29.789: 97.8993% ( 8) 00:12:04.496 29.789 - 30.022: 98.0579% ( 12) 00:12:04.496 30.022 - 30.255: 98.2164% ( 12) 00:12:04.496 30.255 - 30.487: 98.2296% ( 1) 00:12:04.496 30.487 - 30.720: 98.2825% ( 4) 00:12:04.496 30.720 - 30.953: 98.3882% ( 8) 00:12:04.496 30.953 - 31.185: 98.4410% ( 4) 00:12:04.496 31.185 - 31.418: 98.5071% ( 5) 00:12:04.496 31.418 - 31.651: 98.5731% ( 5) 00:12:04.496 31.651 - 31.884: 98.5863% ( 1) 00:12:04.496 31.884 - 32.116: 98.5996% ( 1) 00:12:04.496 32.116 - 32.349: 98.6260% ( 2) 00:12:04.496 32.582 - 32.815: 98.6524% ( 2) 00:12:04.496 32.815 - 33.047: 98.7052% ( 4) 00:12:04.496 33.280 - 33.513: 98.7185% ( 1) 00:12:04.496 33.745 - 33.978: 98.7581% ( 3) 00:12:04.496 34.211 - 34.444: 98.8242% ( 5) 00:12:04.496 34.444 - 34.676: 98.8770% ( 4) 00:12:04.496 34.676 - 34.909: 98.9034% ( 2) 00:12:04.496 34.909 - 35.142: 98.9563% ( 4) 00:12:04.496 35.142 - 35.375: 98.9959% ( 3) 00:12:04.496 35.375 - 35.607: 99.0620% ( 5) 00:12:04.496 35.607 - 35.840: 99.1016% ( 3) 00:12:04.496 35.840 - 36.073: 99.1412% ( 3) 00:12:04.496 36.073 - 36.305: 99.1544% ( 1) 00:12:04.496 36.305 - 36.538: 99.1809% ( 2) 00:12:04.496 36.538 - 36.771: 99.1941% ( 1) 00:12:04.496 36.771 - 37.004: 99.2073% ( 1) 00:12:04.496 37.004 - 37.236: 99.2469% ( 3) 00:12:04.496 37.469 - 37.702: 99.2601% ( 1) 00:12:04.496 37.702 - 37.935: 99.2998% ( 3) 00:12:04.496 37.935 - 38.167: 99.3130% ( 1) 00:12:04.496 38.167 - 38.400: 99.3262% ( 1) 00:12:04.496 38.400 - 38.633: 99.3526% ( 2) 00:12:04.496 39.098 - 39.331: 99.3658% ( 1) 00:12:04.496 39.564 - 39.796: 99.3923% ( 2) 00:12:04.496 39.796 - 40.029: 99.4055% ( 1) 00:12:04.496 40.262 - 40.495: 99.4187% ( 1) 00:12:04.496 40.960 - 41.193: 99.4319% ( 1) 00:12:04.496 41.193 - 41.425: 99.4451% ( 1) 00:12:04.496 41.425 - 41.658: 99.4583% ( 1) 00:12:04.496 43.055 - 43.287: 99.4847% ( 2) 00:12:04.496 43.753 - 43.985: 99.5244% ( 3) 00:12:04.496 44.218 - 44.451: 99.5376% ( 1) 00:12:04.496 44.684 - 44.916: 99.5772% ( 3) 00:12:04.496 44.916 - 45.149: 99.5904% ( 1) 00:12:04.496 45.149 - 45.382: 99.6036% ( 1) 00:12:04.496 45.382 - 45.615: 99.6301% ( 2) 00:12:04.496 45.615 - 45.847: 
99.6433% ( 1) 00:12:04.496 46.080 - 46.313: 99.6697% ( 2) 00:12:04.496 46.313 - 46.545: 99.6961% ( 2) 00:12:04.496 47.476 - 47.709: 99.7093% ( 1) 00:12:04.496 47.942 - 48.175: 99.7226% ( 1) 00:12:04.496 48.873 - 49.105: 99.7358% ( 1) 00:12:04.496 49.338 - 49.571: 99.7490% ( 1) 00:12:04.496 50.735 - 50.967: 99.7622% ( 1) 00:12:04.496 50.967 - 51.200: 99.7754% ( 1) 00:12:04.496 51.433 - 51.665: 99.7886% ( 1) 00:12:04.496 51.898 - 52.131: 99.8282% ( 3) 00:12:04.496 52.131 - 52.364: 99.8547% ( 2) 00:12:04.496 53.527 - 53.760: 99.8679% ( 1) 00:12:04.496 54.691 - 54.924: 99.8811% ( 1) 00:12:04.496 57.484 - 57.716: 99.8943% ( 1) 00:12:04.496 59.113 - 59.345: 99.9207% ( 2) 00:12:04.496 63.302 - 63.767: 99.9339% ( 1) 00:12:04.496 63.767 - 64.233: 99.9472% ( 1) 00:12:04.496 77.731 - 78.196: 99.9604% ( 1) 00:12:04.496 91.695 - 92.160: 99.9736% ( 1) 00:12:04.496 160.116 - 161.047: 99.9868% ( 1) 00:12:04.496 161.047 - 161.978: 100.0000% ( 1) 00:12:04.496 00:12:04.496 Complete histogram 00:12:04.496 ================== 00:12:04.496 Range in us Cumulative Count 00:12:04.496 8.029 - 8.087: 0.0132% ( 1) 00:12:04.496 8.087 - 8.145: 0.0661% ( 4) 00:12:04.496 8.145 - 8.204: 0.2774% ( 16) 00:12:04.496 8.204 - 8.262: 0.8984% ( 47) 00:12:04.496 8.262 - 8.320: 1.7307% ( 63) 00:12:04.496 8.320 - 8.378: 3.1444% ( 107) 00:12:04.496 8.378 - 8.436: 4.6505% ( 114) 00:12:04.496 8.436 - 8.495: 6.5663% ( 145) 00:12:04.496 8.495 - 8.553: 8.7990% ( 169) 00:12:04.496 8.553 - 8.611: 11.6132% ( 213) 00:12:04.496 8.611 - 8.669: 13.8988% ( 173) 00:12:04.496 8.669 - 8.727: 16.2505% ( 178) 00:12:04.496 8.727 - 8.785: 18.8929% ( 200) 00:12:04.496 8.785 - 8.844: 22.0901% ( 242) 00:12:04.496 8.844 - 8.902: 24.7721% ( 203) 00:12:04.496 8.902 - 8.960: 27.0577% ( 173) 00:12:04.496 8.960 - 9.018: 29.3302% ( 172) 00:12:04.496 9.018 - 9.076: 32.0782% ( 208) 00:12:04.496 9.076 - 9.135: 35.6850% ( 273) 00:12:04.496 9.135 - 9.193: 38.3274% ( 200) 00:12:04.496 9.193 - 9.251: 40.3884% ( 156) 00:12:04.496 9.251 - 9.309: 42.4759% ( 158) 00:12:04.496 9.309 - 9.367: 45.0126% ( 192) 00:12:04.496 9.367 - 9.425: 47.8399% ( 214) 00:12:04.496 9.425 - 9.484: 50.2312% ( 181) 00:12:04.496 9.484 - 9.542: 51.9620% ( 131) 00:12:04.496 9.542 - 9.600: 53.4152% ( 110) 00:12:04.496 9.600 - 9.658: 54.4193% ( 76) 00:12:04.496 9.658 - 9.716: 55.3045% ( 67) 00:12:04.496 9.716 - 9.775: 56.1237% ( 62) 00:12:04.496 9.775 - 9.833: 56.7314% ( 46) 00:12:04.496 9.833 - 9.891: 57.0353% ( 23) 00:12:04.496 9.891 - 9.949: 57.3920% ( 27) 00:12:04.496 9.949 - 10.007: 57.6562% ( 20) 00:12:04.496 10.007 - 10.065: 57.9469% ( 22) 00:12:04.496 10.065 - 10.124: 58.1583% ( 16) 00:12:04.496 10.124 - 10.182: 58.4489% ( 22) 00:12:04.496 10.182 - 10.240: 58.6867% ( 18) 00:12:04.496 10.240 - 10.298: 58.7792% ( 7) 00:12:04.496 10.298 - 10.356: 59.0038% ( 17) 00:12:04.496 10.356 - 10.415: 59.1756% ( 13) 00:12:04.496 10.415 - 10.473: 59.4398% ( 20) 00:12:04.496 10.473 - 10.531: 59.6512% ( 16) 00:12:04.496 10.531 - 10.589: 59.7437% ( 7) 00:12:04.496 10.589 - 10.647: 59.8362% ( 7) 00:12:04.496 10.647 - 10.705: 59.9947% ( 12) 00:12:04.496 10.705 - 10.764: 60.0740% ( 6) 00:12:04.496 10.764 - 10.822: 60.2325% ( 12) 00:12:04.496 10.822 - 10.880: 60.4175% ( 14) 00:12:04.496 10.880 - 10.938: 60.6025% ( 14) 00:12:04.496 10.938 - 10.996: 60.6553% ( 4) 00:12:04.496 10.996 - 11.055: 60.7478% ( 7) 00:12:04.496 11.055 - 11.113: 60.9195% ( 13) 00:12:04.496 11.113 - 11.171: 60.9856% ( 5) 00:12:04.496 11.171 - 11.229: 61.0384% ( 4) 00:12:04.496 11.229 - 11.287: 61.1309% ( 7) 00:12:04.496 11.287 - 11.345: 
61.1706% ( 3) 00:12:04.496 11.345 - 11.404: 61.2366% ( 5) 00:12:04.496 11.404 - 11.462: 61.2763% ( 3) 00:12:04.496 11.462 - 11.520: 61.2895% ( 1) 00:12:04.496 11.520 - 11.578: 61.3555% ( 5) 00:12:04.496 11.578 - 11.636: 61.4744% ( 9) 00:12:04.496 11.636 - 11.695: 61.8576% ( 29) 00:12:04.496 11.695 - 11.753: 63.7204% ( 141) 00:12:04.496 11.753 - 11.811: 67.1687% ( 261) 00:12:04.496 11.811 - 11.869: 71.2776% ( 311) 00:12:04.496 11.869 - 11.927: 74.0917% ( 213) 00:12:04.496 11.927 - 11.985: 76.0867% ( 151) 00:12:04.496 11.985 - 12.044: 76.9983% ( 69) 00:12:04.496 12.044 - 12.102: 77.8306% ( 63) 00:12:04.496 12.102 - 12.160: 78.3459% ( 39) 00:12:04.496 12.160 - 12.218: 78.5837% ( 18) 00:12:04.496 12.218 - 12.276: 78.7422% ( 12) 00:12:04.496 12.276 - 12.335: 78.8744% ( 10) 00:12:04.496 12.335 - 12.393: 79.0065% ( 10) 00:12:04.496 12.393 - 12.451: 79.1914% ( 14) 00:12:04.496 12.451 - 12.509: 79.3896% ( 15) 00:12:04.496 12.509 - 12.567: 79.8256% ( 33) 00:12:04.496 12.567 - 12.625: 80.3277% ( 38) 00:12:04.496 12.625 - 12.684: 80.7372% ( 31) 00:12:04.496 12.684 - 12.742: 81.2789% ( 41) 00:12:04.496 12.742 - 12.800: 81.6356% ( 27) 00:12:04.496 12.800 - 12.858: 81.8999% ( 20) 00:12:04.496 12.858 - 12.916: 82.0188% ( 9) 00:12:04.496 12.916 - 12.975: 82.0980% ( 6) 00:12:04.496 12.975 - 13.033: 82.1773% ( 6) 00:12:04.496 13.033 - 13.091: 82.2434% ( 5) 00:12:04.496 13.091 - 13.149: 82.3491% ( 8) 00:12:04.496 13.149 - 13.207: 82.3887% ( 3) 00:12:04.496 13.207 - 13.265: 82.4151% ( 2) 00:12:04.496 13.265 - 13.324: 82.4283% ( 1) 00:12:04.496 13.324 - 13.382: 82.5208% ( 7) 00:12:04.497 13.382 - 13.440: 82.5472% ( 2) 00:12:04.497 13.440 - 13.498: 82.6001% ( 4) 00:12:04.497 13.498 - 13.556: 82.6793% ( 6) 00:12:04.497 13.556 - 13.615: 82.7454% ( 5) 00:12:04.497 13.615 - 13.673: 82.7586% ( 1) 00:12:04.497 13.673 - 13.731: 82.8379% ( 6) 00:12:04.497 13.731 - 13.789: 82.8907% ( 4) 00:12:04.497 13.905 - 13.964: 82.9172% ( 2) 00:12:04.497 13.964 - 14.022: 82.9304% ( 1) 00:12:04.497 14.080 - 14.138: 82.9568% ( 2) 00:12:04.497 14.138 - 14.196: 82.9832% ( 2) 00:12:04.497 14.255 - 14.313: 83.0493% ( 5) 00:12:04.497 14.313 - 14.371: 83.1550% ( 8) 00:12:04.497 14.371 - 14.429: 83.2078% ( 4) 00:12:04.497 14.429 - 14.487: 83.2607% ( 4) 00:12:04.497 14.487 - 14.545: 83.3267% ( 5) 00:12:04.497 14.545 - 14.604: 83.4324% ( 8) 00:12:04.497 14.604 - 14.662: 83.4985% ( 5) 00:12:04.497 14.662 - 14.720: 83.6174% ( 9) 00:12:04.497 14.720 - 14.778: 83.6834% ( 5) 00:12:04.497 14.778 - 14.836: 83.7627% ( 6) 00:12:04.497 14.836 - 14.895: 83.8024% ( 3) 00:12:04.497 14.895 - 15.011: 83.9080% ( 8) 00:12:04.497 15.011 - 15.127: 83.9741% ( 5) 00:12:04.497 15.127 - 15.244: 84.0666% ( 7) 00:12:04.497 15.244 - 15.360: 84.2251% ( 12) 00:12:04.497 15.360 - 15.476: 84.3440% ( 9) 00:12:04.497 15.476 - 15.593: 84.4762% ( 10) 00:12:04.497 15.593 - 15.709: 84.5422% ( 5) 00:12:04.497 15.709 - 15.825: 84.6479% ( 8) 00:12:04.497 15.825 - 15.942: 84.7800% ( 10) 00:12:04.497 15.942 - 16.058: 84.8725% ( 7) 00:12:04.497 16.058 - 16.175: 84.9782% ( 8) 00:12:04.497 16.175 - 16.291: 85.0575% ( 6) 00:12:04.497 16.291 - 16.407: 85.1764% ( 9) 00:12:04.497 16.407 - 16.524: 85.3217% ( 11) 00:12:04.497 16.524 - 16.640: 85.4274% ( 8) 00:12:04.497 16.640 - 16.756: 85.4802% ( 4) 00:12:04.497 16.756 - 16.873: 85.5331% ( 4) 00:12:04.497 16.873 - 16.989: 85.5859% ( 4) 00:12:04.497 16.989 - 17.105: 85.6256% ( 3) 00:12:04.497 17.105 - 17.222: 85.6916% ( 5) 00:12:04.497 17.222 - 17.338: 85.7577% ( 5) 00:12:04.497 17.338 - 17.455: 85.8370% ( 6) 00:12:04.497 17.455 - 17.571: 
85.9427% ( 8) 00:12:04.497 17.571 - 17.687: 85.9691% ( 2) 00:12:04.497 17.804 - 17.920: 86.0351% ( 5) 00:12:04.497 17.920 - 18.036: 86.0748% ( 3) 00:12:04.497 18.036 - 18.153: 86.1276% ( 4) 00:12:04.497 18.153 - 18.269: 86.1408% ( 1) 00:12:04.497 18.269 - 18.385: 86.1673% ( 2) 00:12:04.497 18.385 - 18.502: 86.1937% ( 2) 00:12:04.497 18.502 - 18.618: 86.2333% ( 3) 00:12:04.497 18.618 - 18.735: 86.2465% ( 1) 00:12:04.497 18.735 - 18.851: 86.2862% ( 3) 00:12:04.497 18.851 - 18.967: 86.3258% ( 3) 00:12:04.497 18.967 - 19.084: 86.3390% ( 1) 00:12:04.497 19.084 - 19.200: 86.3654% ( 2) 00:12:04.497 19.200 - 19.316: 86.4579% ( 7) 00:12:04.497 19.316 - 19.433: 86.5240% ( 5) 00:12:04.497 19.433 - 19.549: 86.6429% ( 9) 00:12:04.497 19.549 - 19.665: 86.7486% ( 8) 00:12:04.497 19.665 - 19.782: 86.8146% ( 5) 00:12:04.497 19.782 - 19.898: 86.8807% ( 5) 00:12:04.497 19.898 - 20.015: 86.9203% ( 3) 00:12:04.497 20.015 - 20.131: 86.9732% ( 4) 00:12:04.497 20.131 - 20.247: 87.0392% ( 5) 00:12:04.497 20.247 - 20.364: 87.0789% ( 3) 00:12:04.497 20.364 - 20.480: 87.1449% ( 5) 00:12:04.497 20.480 - 20.596: 87.2110% ( 5) 00:12:04.497 20.596 - 20.713: 87.3035% ( 7) 00:12:04.497 20.713 - 20.829: 87.3299% ( 2) 00:12:04.497 20.829 - 20.945: 87.3960% ( 5) 00:12:04.497 20.945 - 21.062: 87.4752% ( 6) 00:12:04.497 21.062 - 21.178: 87.5017% ( 2) 00:12:04.497 21.178 - 21.295: 87.5677% ( 5) 00:12:04.497 21.295 - 21.411: 87.6338% ( 5) 00:12:04.497 21.411 - 21.527: 87.6602% ( 2) 00:12:04.497 21.527 - 21.644: 87.7263% ( 5) 00:12:04.497 21.644 - 21.760: 87.7923% ( 5) 00:12:04.497 21.760 - 21.876: 87.8187% ( 2) 00:12:04.497 21.876 - 21.993: 87.9244% ( 8) 00:12:04.497 21.993 - 22.109: 88.0169% ( 7) 00:12:04.497 22.109 - 22.225: 88.0565% ( 3) 00:12:04.497 22.225 - 22.342: 88.1490% ( 7) 00:12:04.497 22.342 - 22.458: 88.1755% ( 2) 00:12:04.497 22.458 - 22.575: 88.2547% ( 6) 00:12:04.497 22.575 - 22.691: 88.3340% ( 6) 00:12:04.497 22.691 - 22.807: 88.5982% ( 20) 00:12:04.497 22.807 - 22.924: 88.8096% ( 16) 00:12:04.497 22.924 - 23.040: 89.0210% ( 16) 00:12:04.497 23.040 - 23.156: 89.1795% ( 12) 00:12:04.497 23.156 - 23.273: 89.6948% ( 39) 00:12:04.497 23.273 - 23.389: 90.3290% ( 48) 00:12:04.497 23.389 - 23.505: 90.8971% ( 43) 00:12:04.497 23.505 - 23.622: 91.6502% ( 57) 00:12:04.497 23.622 - 23.738: 92.6278% ( 74) 00:12:04.497 23.738 - 23.855: 93.6980% ( 81) 00:12:04.497 23.855 - 23.971: 94.6360% ( 71) 00:12:04.497 23.971 - 24.087: 95.4155% ( 59) 00:12:04.497 24.087 - 24.204: 95.9704% ( 42) 00:12:04.497 24.204 - 24.320: 96.5253% ( 42) 00:12:04.497 24.320 - 24.436: 96.8688% ( 26) 00:12:04.497 24.436 - 24.553: 97.1595% ( 22) 00:12:04.497 24.553 - 24.669: 97.4633% ( 23) 00:12:04.497 24.669 - 24.785: 97.6879% ( 17) 00:12:04.497 24.785 - 24.902: 97.8201% ( 10) 00:12:04.497 24.902 - 25.018: 97.9125% ( 7) 00:12:04.497 25.018 - 25.135: 98.0711% ( 12) 00:12:04.497 25.135 - 25.251: 98.1239% ( 4) 00:12:04.497 25.251 - 25.367: 98.1768% ( 4) 00:12:04.497 25.367 - 25.484: 98.2560% ( 6) 00:12:04.497 25.484 - 25.600: 98.3089% ( 4) 00:12:04.497 25.600 - 25.716: 98.3221% ( 1) 00:12:04.497 25.716 - 25.833: 98.3485% ( 2) 00:12:04.497 25.833 - 25.949: 98.3617% ( 1) 00:12:04.497 25.949 - 26.065: 98.3750% ( 1) 00:12:04.497 26.065 - 26.182: 98.4014% ( 2) 00:12:04.497 26.182 - 26.298: 98.4278% ( 2) 00:12:04.497 26.298 - 26.415: 98.5071% ( 6) 00:12:04.497 26.415 - 26.531: 98.5203% ( 1) 00:12:04.497 26.531 - 26.647: 98.5467% ( 2) 00:12:04.497 26.647 - 26.764: 98.5599% ( 1) 00:12:04.497 26.764 - 26.880: 98.5731% ( 1) 00:12:04.497 27.113 - 27.229: 98.5863% ( 1) 
00:12:04.497 27.229 - 27.345: 98.6128% ( 2) 00:12:04.497 27.345 - 27.462: 98.6392% ( 2) 00:12:04.497 27.462 - 27.578: 98.6524% ( 1) 00:12:04.497 27.811 - 27.927: 98.6656% ( 1) 00:12:04.497 27.927 - 28.044: 98.6788% ( 1) 00:12:04.497 28.276 - 28.393: 98.7052% ( 2) 00:12:04.497 28.509 - 28.625: 98.7317% ( 2) 00:12:04.497 28.742 - 28.858: 98.7449% ( 1) 00:12:04.497 29.091 - 29.207: 98.7977% ( 4) 00:12:04.497 29.207 - 29.324: 98.8109% ( 1) 00:12:04.497 29.324 - 29.440: 98.8242% ( 1) 00:12:04.497 29.440 - 29.556: 98.8506% ( 2) 00:12:04.497 29.556 - 29.673: 98.9034% ( 4) 00:12:04.497 29.789 - 30.022: 98.9166% ( 1) 00:12:04.497 30.022 - 30.255: 98.9959% ( 6) 00:12:04.497 30.255 - 30.487: 99.0752% ( 6) 00:12:04.497 30.487 - 30.720: 99.1016% ( 2) 00:12:04.497 30.720 - 30.953: 99.1148% ( 1) 00:12:04.497 30.953 - 31.185: 99.1280% ( 1) 00:12:04.497 31.185 - 31.418: 99.2073% ( 6) 00:12:04.497 31.418 - 31.651: 99.2469% ( 3) 00:12:04.497 31.651 - 31.884: 99.2866% ( 3) 00:12:04.497 31.884 - 32.116: 99.2998% ( 1) 00:12:04.497 32.116 - 32.349: 99.3790% ( 6) 00:12:04.497 32.349 - 32.582: 99.4187% ( 3) 00:12:04.497 32.582 - 32.815: 99.4715% ( 4) 00:12:04.497 33.047 - 33.280: 99.4980% ( 2) 00:12:04.497 33.280 - 33.513: 99.5376% ( 3) 00:12:04.497 33.513 - 33.745: 99.5508% ( 1) 00:12:04.497 33.745 - 33.978: 99.5772% ( 2) 00:12:04.497 34.211 - 34.444: 99.5904% ( 1) 00:12:04.497 34.676 - 34.909: 99.6036% ( 1) 00:12:04.497 34.909 - 35.142: 99.6169% ( 1) 00:12:04.497 35.375 - 35.607: 99.6301% ( 1) 00:12:04.497 35.607 - 35.840: 99.6433% ( 1) 00:12:04.497 37.702 - 37.935: 99.6565% ( 1) 00:12:04.497 38.167 - 38.400: 99.6697% ( 1) 00:12:04.497 38.865 - 39.098: 99.6829% ( 1) 00:12:04.497 39.098 - 39.331: 99.6961% ( 1) 00:12:04.497 39.331 - 39.564: 99.7358% ( 3) 00:12:04.498 39.564 - 39.796: 99.7622% ( 2) 00:12:04.498 40.495 - 40.727: 99.7754% ( 1) 00:12:04.498 40.727 - 40.960: 99.7886% ( 1) 00:12:04.498 41.425 - 41.658: 99.8150% ( 2) 00:12:04.498 42.589 - 42.822: 99.8282% ( 1) 00:12:04.498 44.451 - 44.684: 99.8415% ( 1) 00:12:04.498 45.382 - 45.615: 99.8547% ( 1) 00:12:04.498 45.847 - 46.080: 99.8679% ( 1) 00:12:04.498 47.244 - 47.476: 99.8811% ( 1) 00:12:04.498 49.571 - 49.804: 99.8943% ( 1) 00:12:04.498 52.131 - 52.364: 99.9075% ( 1) 00:12:04.498 56.320 - 56.553: 99.9207% ( 1) 00:12:04.498 57.018 - 57.251: 99.9339% ( 1) 00:12:04.498 60.044 - 60.509: 99.9472% ( 1) 00:12:04.498 86.109 - 86.575: 99.9604% ( 1) 00:12:04.498 86.575 - 87.040: 99.9736% ( 1) 00:12:04.498 102.865 - 103.331: 99.9868% ( 1) 00:12:04.498 284.858 - 286.720: 100.0000% ( 1) 00:12:04.498 00:12:04.498 ************************************ 00:12:04.498 END TEST nvme_overhead 00:12:04.498 ************************************ 00:12:04.498 00:12:04.498 real 0m1.352s 00:12:04.498 user 0m1.116s 00:12:04.498 sys 0m0.178s 00:12:04.498 09:08:59 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.498 09:08:59 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:12:04.498 09:08:59 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:04.498 09:08:59 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:04.498 09:08:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.498 09:08:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.498 ************************************ 00:12:04.498 START TEST nvme_arbitration 00:12:04.498 ************************************ 00:12:04.498 09:08:59 nvme.nvme_arbitration -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:07.781 Initializing NVMe Controllers 00:12:07.781 Attached to 0000:00:10.0 00:12:07.781 Attached to 0000:00:11.0 00:12:07.781 Attached to 0000:00:13.0 00:12:07.781 Attached to 0000:00:12.0 00:12:07.781 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:12:07.781 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:12:07.781 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:12:07.781 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:12:07.781 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:12:07.781 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:12:07.781 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:12:07.781 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:12:07.781 Initialization complete. Launching workers. 00:12:07.781 Starting thread on core 1 with urgent priority queue 00:12:07.781 Starting thread on core 2 with urgent priority queue 00:12:07.781 Starting thread on core 3 with urgent priority queue 00:12:07.781 Starting thread on core 0 with urgent priority queue 00:12:07.781 QEMU NVMe Ctrl (12340 ) core 0: 640.00 IO/s 156.25 secs/100000 ios 00:12:07.781 QEMU NVMe Ctrl (12342 ) core 0: 640.00 IO/s 156.25 secs/100000 ios 00:12:07.781 QEMU NVMe Ctrl (12341 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:12:07.781 QEMU NVMe Ctrl (12342 ) core 1: 704.00 IO/s 142.05 secs/100000 ios 00:12:07.781 QEMU NVMe Ctrl (12343 ) core 2: 704.00 IO/s 142.05 secs/100000 ios 00:12:07.781 QEMU NVMe Ctrl (12342 ) core 3: 618.67 IO/s 161.64 secs/100000 ios 00:12:07.781 ======================================================== 00:12:07.781 00:12:07.781 00:12:07.781 real 0m3.391s 00:12:07.781 user 0m9.330s 00:12:07.781 sys 0m0.161s 00:12:07.781 ************************************ 00:12:07.781 END TEST nvme_arbitration 00:12:07.781 ************************************ 00:12:07.781 09:09:02 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.781 09:09:02 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:12:07.781 09:09:02 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:07.781 09:09:02 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:07.781 09:09:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.781 09:09:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.781 ************************************ 00:12:07.781 START TEST nvme_single_aen 00:12:07.781 ************************************ 00:12:07.781 09:09:02 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:12:08.348 Asynchronous Event Request test 00:12:08.348 Attached to 0000:00:10.0 00:12:08.348 Attached to 0000:00:11.0 00:12:08.348 Attached to 0000:00:13.0 00:12:08.348 Attached to 0000:00:12.0 00:12:08.348 Reset controller to setup AER completions for this process 00:12:08.348 Registering asynchronous event callbacks... 
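nvme_single_aen drives test/nvme/aer/aer with -T, and the trace below shows what that flag exercises: read each controller's temperature threshold, set it below the current temperature to force an Asynchronous Event Request, then restore it. A sketch of the direct call (command taken from the run_test line above; behavior inferred from the output, not stated in the log):

# -T: temperature-threshold AER test; -i 0: attach to shm instance 0
sudo ./test/nvme/aer/aer -T -i 0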
00:12:08.348 Getting orig temperature thresholds of all controllers 00:12:08.349 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:08.349 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:08.349 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:08.349 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:08.349 Setting all controllers temperature threshold low to trigger AER 00:12:08.349 Waiting for all controllers temperature threshold to be set lower 00:12:08.349 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:08.349 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:08.349 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:08.349 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:08.349 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:08.349 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:08.349 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:08.349 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:08.349 Waiting for all controllers to trigger AER and reset threshold 00:12:08.349 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:08.349 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:08.349 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:08.349 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:08.349 Cleaning up... 00:12:08.349 ************************************ 00:12:08.349 END TEST nvme_single_aen 00:12:08.349 ************************************ 00:12:08.349 00:12:08.349 real 0m0.353s 00:12:08.349 user 0m0.129s 00:12:08.349 sys 0m0.176s 00:12:08.349 09:09:03 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.349 09:09:03 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:12:08.349 09:09:03 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:12:08.349 09:09:03 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.349 09:09:03 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.349 09:09:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:08.349 ************************************ 00:12:08.349 START TEST nvme_doorbell_aers 00:12:08.349 ************************************ 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
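The xtrace above shows how nvme_doorbell_aers builds its device list: gen_nvme.sh emits a JSON config and jq extracts each PCIe address (traddr). The per-device loop that produces the runs below, reconstructed from the trace ($testdir here stands in for test/nvme/doorbell_aers; the timeout value and the -r argument format are verbatim from the commands that follow):

bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    # give each device 10 s; --preserve-status keeps the test's own exit code
    timeout --preserve-status 10 "$testdir/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done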
00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:08.349 09:09:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:08.607 [2024-11-20 09:09:03.641438] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:18.580 Executing: test_write_invalid_db 00:12:18.580 Waiting for AER completion... 00:12:18.580 Failure: test_write_invalid_db 00:12:18.580 00:12:18.581 Executing: test_invalid_db_write_overflow_sq 00:12:18.581 Waiting for AER completion... 00:12:18.581 Failure: test_invalid_db_write_overflow_sq 00:12:18.581 00:12:18.581 Executing: test_invalid_db_write_overflow_cq 00:12:18.581 Waiting for AER completion... 00:12:18.581 Failure: test_invalid_db_write_overflow_cq 00:12:18.581 00:12:18.581 09:09:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:18.581 09:09:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:18.581 [2024-11-20 09:09:13.683695] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:28.548 Executing: test_write_invalid_db 00:12:28.548 Waiting for AER completion... 00:12:28.548 Failure: test_write_invalid_db 00:12:28.548 00:12:28.548 Executing: test_invalid_db_write_overflow_sq 00:12:28.548 Waiting for AER completion... 00:12:28.548 Failure: test_invalid_db_write_overflow_sq 00:12:28.548 00:12:28.548 Executing: test_invalid_db_write_overflow_cq 00:12:28.548 Waiting for AER completion... 00:12:28.548 Failure: test_invalid_db_write_overflow_cq 00:12:28.548 00:12:28.548 09:09:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:28.548 09:09:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:28.808 [2024-11-20 09:09:23.753818] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:38.780 Executing: test_write_invalid_db 00:12:38.781 Waiting for AER completion... 00:12:38.781 Failure: test_write_invalid_db 00:12:38.781 00:12:38.781 Executing: test_invalid_db_write_overflow_sq 00:12:38.781 Waiting for AER completion... 00:12:38.781 Failure: test_invalid_db_write_overflow_sq 00:12:38.781 00:12:38.781 Executing: test_invalid_db_write_overflow_cq 00:12:38.781 Waiting for AER completion... 
00:12:38.781 Failure: test_invalid_db_write_overflow_cq 00:12:38.781 00:12:38.781 09:09:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:12:38.781 09:09:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:38.781 [2024-11-20 09:09:33.802591] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:48.749 Executing: test_write_invalid_db 00:12:48.749 Waiting for AER completion... 00:12:48.749 Failure: test_write_invalid_db 00:12:48.749 00:12:48.749 Executing: test_invalid_db_write_overflow_sq 00:12:48.749 Waiting for AER completion... 00:12:48.749 Failure: test_invalid_db_write_overflow_sq 00:12:48.749 00:12:48.749 Executing: test_invalid_db_write_overflow_cq 00:12:48.749 Waiting for AER completion... 00:12:48.749 Failure: test_invalid_db_write_overflow_cq 00:12:48.749 00:12:48.749 00:12:48.749 real 0m40.285s 00:12:48.749 user 0m34.125s 00:12:48.749 sys 0m5.747s 00:12:48.749 09:09:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.749 09:09:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:48.749 ************************************ 00:12:48.749 END TEST nvme_doorbell_aers 00:12:48.749 ************************************ 00:12:48.749 09:09:43 nvme -- nvme/nvme.sh@97 -- # uname 00:12:48.749 09:09:43 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:48.749 09:09:43 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:48.749 09:09:43 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:48.749 09:09:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.749 09:09:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.749 ************************************ 00:12:48.749 START TEST nvme_multi_aen 00:12:48.749 ************************************ 00:12:48.749 09:09:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:49.008 [2024-11-20 09:09:43.886591] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.886705] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.886726] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.888699] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.888740] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.888756] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.890426] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. 
Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.890697] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.890720] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.892374] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.892422] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 [2024-11-20 09:09:43.892439] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64757) is not found. Dropping the request. 00:12:49.008 Child process pid: 65274 00:12:49.267 [Child] Asynchronous Event Request test 00:12:49.267 [Child] Attached to 0000:00:10.0 00:12:49.267 [Child] Attached to 0000:00:11.0 00:12:49.267 [Child] Attached to 0000:00:13.0 00:12:49.267 [Child] Attached to 0000:00:12.0 00:12:49.267 [Child] Registering asynchronous event callbacks... 00:12:49.267 [Child] Getting orig temperature thresholds of all controllers 00:12:49.267 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:49.267 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:49.267 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:49.267 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:49.267 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:49.267 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:49.267 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:49.267 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:49.267 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:49.267 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:49.267 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:49.267 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:49.267 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:49.267 [Child] Cleaning up... 00:12:49.267 Asynchronous Event Request test 00:12:49.267 Attached to 0000:00:10.0 00:12:49.267 Attached to 0000:00:11.0 00:12:49.267 Attached to 0000:00:13.0 00:12:49.267 Attached to 0000:00:12.0 00:12:49.267 Reset controller to setup AER completions for this process 00:12:49.267 Registering asynchronous event callbacks... 
00:12:49.267 Getting orig temperature thresholds of all controllers 00:12:49.267 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:49.267 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:49.267 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:49.267 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:49.267 Setting all controllers temperature threshold low to trigger AER 00:12:49.267 Waiting for all controllers temperature threshold to be set lower 00:12:49.267 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:49.267 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:49.267 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:49.267 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:49.267 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:49.267 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:49.267 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:49.267 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:49.267 Waiting for all controllers to trigger AER and reset threshold 00:12:49.267 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:49.267 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:49.267 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:49.267 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:49.267 Cleaning up... 00:12:49.267 00:12:49.267 real 0m0.695s 00:12:49.267 user 0m0.241s 00:12:49.267 sys 0m0.329s 00:12:49.267 09:09:44 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.267 09:09:44 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 ************************************ 00:12:49.267 END TEST nvme_multi_aen 00:12:49.267 ************************************ 00:12:49.267 09:09:44 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:49.267 09:09:44 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:49.267 09:09:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.267 09:09:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.267 ************************************ 00:12:49.267 START TEST nvme_startup 00:12:49.267 ************************************ 00:12:49.267 09:09:44 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:49.835 Initializing NVMe Controllers 00:12:49.835 Attached to 0000:00:10.0 00:12:49.835 Attached to 0000:00:11.0 00:12:49.835 Attached to 0000:00:13.0 00:12:49.835 Attached to 0000:00:12.0 00:12:49.835 Initialization complete. 00:12:49.835 Time used:224581.266 (us). 
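nvme_startup, which just reported Time used:224581.266 (us), checks controller bring-up time against a budget passed via -t; reading the unit as microseconds is an inference from the Time used line, not stated in the log. Sketch of the direct call:

# fail if attaching and initializing all four controllers exceeds 1,000,000 us
sudo ./test/nvme/startup/startup -t 1000000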
00:12:49.835 00:12:49.835 real 0m0.326s 00:12:49.835 user 0m0.108s 00:12:49.835 sys 0m0.168s 00:12:49.835 09:09:44 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.835 ************************************ 00:12:49.835 END TEST nvme_startup 00:12:49.835 ************************************ 00:12:49.835 09:09:44 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:49.835 09:09:44 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:49.835 09:09:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:49.835 09:09:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.835 09:09:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.835 ************************************ 00:12:49.835 START TEST nvme_multi_secondary 00:12:49.835 ************************************ 00:12:49.835 09:09:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:49.835 09:09:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65330 00:12:49.835 09:09:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:49.835 09:09:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65331 00:12:49.835 09:09:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:49.835 09:09:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:53.124 Initializing NVMe Controllers 00:12:53.124 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:53.124 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:53.124 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:53.124 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:53.124 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:53.124 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:53.124 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:53.124 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:53.124 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:53.124 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:53.124 Initialization complete. Launching workers. 
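nvme_multi_secondary has just launched three spdk_nvme_perf instances that share shm id 0: a 5-second run on core 0 (mask 0x1), launched first and therefore presumably the primary process, plus 3-second secondaries on core 1 (0x2) and core 2 (0x4); the core-2 secondary is the first to finish initializing above. A hand-run sketch of what nvme.sh does, with the backgrounding and pid bookkeeping implied by the wait 65330 / wait 65331 lines later in the trace:

./build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, core 0
pid0=$!
./build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1
./build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary, core 2
wait "$pid0"   # the script waits on each pid in turn ("wait 65330", "wait 65331")
wait           # reap the remaining secondaries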
00:12:53.124 ======================================================== 00:12:53.124 Latency(us) 00:12:53.124 Device Information : IOPS MiB/s Average min max 00:12:53.124 PCIE (0000:00:10.0) NSID 1 from core 2: 2227.16 8.70 7181.66 1760.69 14495.13 00:12:53.124 PCIE (0000:00:11.0) NSID 1 from core 2: 2227.16 8.70 7183.80 1565.05 15617.13 00:12:53.124 PCIE (0000:00:13.0) NSID 1 from core 2: 2227.16 8.70 7183.85 1837.67 17237.50 00:12:53.124 PCIE (0000:00:12.0) NSID 1 from core 2: 2227.16 8.70 7183.16 1637.99 17348.79 00:12:53.124 PCIE (0000:00:12.0) NSID 2 from core 2: 2227.16 8.70 7183.82 1808.46 16328.55 00:12:53.124 PCIE (0000:00:12.0) NSID 3 from core 2: 2227.16 8.70 7184.07 1813.65 14279.33 00:12:53.124 ======================================================== 00:12:53.124 Total : 13362.97 52.20 7183.39 1565.05 17348.79 00:12:53.124 00:12:53.382 09:09:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65330 00:12:53.382 Initializing NVMe Controllers 00:12:53.382 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:53.382 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:53.382 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:53.382 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:53.382 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:53.382 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:53.382 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:53.382 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:53.382 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:53.382 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:53.382 Initialization complete. Launching workers. 00:12:53.382 ======================================================== 00:12:53.382 Latency(us) 00:12:53.382 Device Information : IOPS MiB/s Average min max 00:12:53.382 PCIE (0000:00:10.0) NSID 1 from core 1: 4936.79 19.28 3238.89 1549.66 8042.46 00:12:53.382 PCIE (0000:00:11.0) NSID 1 from core 1: 4936.79 19.28 3240.24 1525.07 7774.13 00:12:53.382 PCIE (0000:00:13.0) NSID 1 from core 1: 4936.79 19.28 3240.14 1500.21 6706.65 00:12:53.382 PCIE (0000:00:12.0) NSID 1 from core 1: 4936.79 19.28 3240.02 1550.51 7166.06 00:12:53.382 PCIE (0000:00:12.0) NSID 2 from core 1: 4936.79 19.28 3239.92 1425.41 6738.22 00:12:53.382 PCIE (0000:00:12.0) NSID 3 from core 1: 4936.79 19.28 3239.73 1453.11 7099.07 00:12:53.382 ======================================================== 00:12:53.382 Total : 29620.75 115.71 3239.82 1425.41 8042.46 00:12:53.382 00:12:55.285 Initializing NVMe Controllers 00:12:55.285 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:55.285 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:55.285 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:55.285 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:55.285 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:55.285 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:55.285 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:55.285 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:55.285 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:55.285 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:55.285 Initialization complete. Launching workers. 
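In these Latency(us) tables each row is one namespace as seen from the given core: IOPS and MiB/s describe throughput, and the last three columns are average/min/max completion latency in microseconds. The two throughput columns agree by construction, since every I/O in these runs is 4096 bytes; checking the core-1 row above:

    # 4936.79 IOPS x 4096 B ~= 19.28 MiB/s, matching the table
    echo 'scale=2; 4936.79 * 4096 / 1024 / 1024' | bc    # -> 19.28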
00:12:55.285 ======================================================== 00:12:55.285 Latency(us) 00:12:55.285 Device Information : IOPS MiB/s Average min max 00:12:55.285 PCIE (0000:00:10.0) NSID 1 from core 0: 7205.21 28.15 2218.88 965.70 8478.76 00:12:55.285 PCIE (0000:00:11.0) NSID 1 from core 0: 7205.21 28.15 2220.15 998.94 7675.25 00:12:55.285 PCIE (0000:00:13.0) NSID 1 from core 0: 7205.21 28.15 2220.08 917.57 7757.47 00:12:55.285 PCIE (0000:00:12.0) NSID 1 from core 0: 7205.21 28.15 2220.00 905.81 8304.63 00:12:55.285 PCIE (0000:00:12.0) NSID 2 from core 0: 7205.21 28.15 2219.94 860.24 8051.08 00:12:55.285 PCIE (0000:00:12.0) NSID 3 from core 0: 7205.21 28.15 2219.86 754.56 8254.22 00:12:55.285 ======================================================== 00:12:55.285 Total : 43231.23 168.87 2219.82 754.56 8478.76 00:12:55.285 00:12:55.285 09:09:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65331 00:12:55.285 09:09:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65400 00:12:55.285 09:09:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:55.285 09:09:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65401 00:12:55.285 09:09:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:55.285 09:09:50 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:58.570 Initializing NVMe Controllers 00:12:58.570 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:58.570 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:58.570 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:58.570 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:58.570 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:58.570 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:58.570 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:58.570 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:58.570 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:58.570 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:58.570 Initialization complete. Launching workers. 
00:12:58.570 ======================================================== 00:12:58.570 Latency(us) 00:12:58.570 Device Information : IOPS MiB/s Average min max 00:12:58.570 PCIE (0000:00:10.0) NSID 1 from core 0: 4721.90 18.44 3386.25 1049.33 8755.92 00:12:58.570 PCIE (0000:00:11.0) NSID 1 from core 0: 4721.90 18.44 3387.85 1110.01 8512.37 00:12:58.570 PCIE (0000:00:13.0) NSID 1 from core 0: 4721.90 18.44 3387.70 1136.38 8507.53 00:12:58.570 PCIE (0000:00:12.0) NSID 1 from core 0: 4721.90 18.44 3387.56 1102.06 8146.65 00:12:58.570 PCIE (0000:00:12.0) NSID 2 from core 0: 4721.90 18.44 3387.40 1111.89 8390.34 00:12:58.570 PCIE (0000:00:12.0) NSID 3 from core 0: 4721.90 18.44 3387.29 1073.79 8878.09 00:12:58.570 ======================================================== 00:12:58.571 Total : 28331.38 110.67 3387.34 1049.33 8878.09 00:12:58.571 00:12:58.571 Initializing NVMe Controllers 00:12:58.571 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:58.571 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:58.571 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:58.571 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:58.571 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:58.571 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:58.571 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:58.571 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:58.571 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:58.571 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:58.571 Initialization complete. Launching workers. 00:12:58.571 ======================================================== 00:12:58.571 Latency(us) 00:12:58.571 Device Information : IOPS MiB/s Average min max 00:12:58.571 PCIE (0000:00:10.0) NSID 1 from core 1: 5278.33 20.62 3029.25 1056.67 7871.63 00:12:58.571 PCIE (0000:00:11.0) NSID 1 from core 1: 5278.33 20.62 3030.51 1061.48 7005.40 00:12:58.571 PCIE (0000:00:13.0) NSID 1 from core 1: 5278.33 20.62 3030.31 939.91 6978.71 00:12:58.571 PCIE (0000:00:12.0) NSID 1 from core 1: 5278.33 20.62 3030.08 930.59 7260.79 00:12:58.571 PCIE (0000:00:12.0) NSID 2 from core 1: 5278.33 20.62 3029.86 885.50 7455.01 00:12:58.571 PCIE (0000:00:12.0) NSID 3 from core 1: 5278.33 20.62 3029.66 801.48 8141.05 00:12:58.571 ======================================================== 00:12:58.571 Total : 31670.00 123.71 3029.94 801.48 8141.05 00:12:58.571 00:13:01.103 Initializing NVMe Controllers 00:13:01.103 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:13:01.103 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:13:01.103 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:13:01.103 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:13:01.103 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:13:01.103 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:13:01.103 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:13:01.103 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:13:01.103 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:13:01.103 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:13:01.103 Initialization complete. Launching workers. 
00:13:01.103 ======================================================== 00:13:01.103 Latency(us) 00:13:01.103 Device Information : IOPS MiB/s Average min max 00:13:01.103 PCIE (0000:00:10.0) NSID 1 from core 2: 3525.73 13.77 4535.49 991.19 18229.09 00:13:01.103 PCIE (0000:00:11.0) NSID 1 from core 2: 3525.73 13.77 4534.11 1013.19 17901.98 00:13:01.103 PCIE (0000:00:13.0) NSID 1 from core 2: 3525.73 13.77 4532.84 1000.22 17490.11 00:13:01.103 PCIE (0000:00:12.0) NSID 1 from core 2: 3525.73 13.77 4533.50 1050.19 17134.18 00:13:01.103 PCIE (0000:00:12.0) NSID 2 from core 2: 3525.73 13.77 4533.18 1025.60 16735.47 00:13:01.103 PCIE (0000:00:12.0) NSID 3 from core 2: 3525.73 13.77 4533.53 1010.44 19838.01 00:13:01.103 ======================================================== 00:13:01.103 Total : 21154.41 82.63 4533.78 991.19 19838.01 00:13:01.103 00:13:01.103 ************************************ 00:13:01.103 END TEST nvme_multi_secondary 00:13:01.103 ************************************ 00:13:01.103 09:09:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65400 00:13:01.103 09:09:55 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65401 00:13:01.103 00:13:01.103 real 0m11.073s 00:13:01.103 user 0m18.708s 00:13:01.103 sys 0m1.027s 00:13:01.103 09:09:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.103 09:09:55 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:13:01.103 09:09:55 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:13:01.103 09:09:55 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:13:01.103 09:09:55 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64338 ]] 00:13:01.103 09:09:55 nvme -- common/autotest_common.sh@1094 -- # kill 64338 00:13:01.103 09:09:55 nvme -- common/autotest_common.sh@1095 -- # wait 64338 00:13:01.103 [2024-11-20 09:09:55.848904] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.103 [2024-11-20 09:09:55.849834] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.103 [2024-11-20 09:09:55.849884] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.103 [2024-11-20 09:09:55.849907] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.852203] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.852257] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.852276] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.852295] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.854518] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 
00:13:01.104 [2024-11-20 09:09:55.854567] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.854598] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.854616] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.856820] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.857027] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.857051] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 [2024-11-20 09:09:55.857070] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65273) is not found. Dropping the request. 00:13:01.104 09:09:56 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:13:01.104 09:09:56 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:13:01.104 09:09:56 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:01.104 09:09:56 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:01.104 09:09:56 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.104 09:09:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.104 ************************************ 00:13:01.104 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:01.104 ************************************ 00:13:01.104 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:01.104 * Looking for test storage... 
00:13:01.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:01.104 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:01.104 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:01.104 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.364 --rc genhtml_branch_coverage=1 00:13:01.364 --rc genhtml_function_coverage=1 00:13:01.364 --rc genhtml_legend=1 00:13:01.364 --rc geninfo_all_blocks=1 00:13:01.364 --rc geninfo_unexecuted_blocks=1 00:13:01.364 00:13:01.364 ' 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.364 --rc genhtml_branch_coverage=1 00:13:01.364 --rc genhtml_function_coverage=1 00:13:01.364 --rc genhtml_legend=1 00:13:01.364 --rc geninfo_all_blocks=1 00:13:01.364 --rc geninfo_unexecuted_blocks=1 00:13:01.364 00:13:01.364 ' 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.364 --rc genhtml_branch_coverage=1 00:13:01.364 --rc genhtml_function_coverage=1 00:13:01.364 --rc genhtml_legend=1 00:13:01.364 --rc geninfo_all_blocks=1 00:13:01.364 --rc geninfo_unexecuted_blocks=1 00:13:01.364 00:13:01.364 ' 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:01.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.364 --rc genhtml_branch_coverage=1 00:13:01.364 --rc genhtml_function_coverage=1 00:13:01.364 --rc genhtml_legend=1 00:13:01.364 --rc geninfo_all_blocks=1 00:13:01.364 --rc geninfo_unexecuted_blocks=1 00:13:01.364 00:13:01.364 ' 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:01.364 
09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:01.364 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65567 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:01.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65567 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65567 ']' 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
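get_first_nvme_bdf above is what turns "whatever NVMe devices this VM has" into a concrete PCI address: scripts/gen_nvme.sh emits a bdev config JSON entry per discovered controller, jq extracts each traddr, and the first one (0000:00:10.0 on this run) becomes the reset target. The same pipeline in isolation:

    rootdir=/home/vagrant/spdk_repo/spdk
    # one traddr per discovered controller, e.g. 0000:00:10.0 ... 0000:00:13.0
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    echo "${bdfs[0]}"    # first bdf wins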
00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.365 09:09:56 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:01.365 [2024-11-20 09:09:56.473171] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:13:01.365 [2024-11-20 09:09:56.473700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65567 ] 00:13:01.624 [2024-11-20 09:09:56.696843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.882 [2024-11-20 09:09:56.855689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.882 [2024-11-20 09:09:56.855764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.882 [2024-11-20 09:09:56.855992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.882 [2024-11-20 09:09:56.856051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:02.820 nvme0n1 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_LBVY6.txt 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:02.820 true 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732093797 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65596 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:13:02.820 09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:02.820 
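Condensing the xtrace above, the stuck-admin-command scenario is four RPCs: attach the controller as bdev nvme0, arm a one-shot injection on admin opcode 0x0a (Get Features) that holds the command for up to err_injection_timeout (15 s) without submitting it, fire a Get Features through bdev_nvme_send_cmd so it hangs, then reset the controller; the reset path must complete the stuck command manually, and the test afterwards checks that the returned status matches the injected sct=0/sc=1 and that the sequence took no more than test_timeout=5 s. Roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # GET FEATURES (opcode 0x0a, cdw10=7: number of queues), base64-encoded;
    # runs in the background and gets stuck behind the injection
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0    # completes the stuck command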
09:09:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:04.811 [2024-11-20 09:09:59.892267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:13:04.811 [2024-11-20 09:09:59.892792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:04.811 [2024-11-20 09:09:59.892835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:04.811 [2024-11-20 09:09:59.892857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.811 [2024-11-20 09:09:59.895584] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:13:04.811 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65596 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65596 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65596 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.811 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_LBVY6.txt 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:05.070 09:09:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_LBVY6.txt 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65567 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65567 ']' 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65567 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65567 00:13:05.070 killing process with pid 65567 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65567' 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65567 00:13:05.070 09:10:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65567 00:13:06.973 09:10:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:06.973 09:10:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:06.973 00:13:06.973 real 0m5.999s 00:13:06.973 user 0m20.905s 00:13:06.973 sys 0m0.900s 00:13:06.973 09:10:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:06.973 ************************************ 00:13:06.973 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:06.974 ************************************ 00:13:06.974 09:10:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:13:07.232 09:10:02 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:07.232 09:10:02 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:07.232 09:10:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:07.232 09:10:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.232 09:10:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.232 ************************************ 00:13:07.232 START TEST nvme_fio 00:13:07.232 ************************************ 00:13:07.232 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:13:07.232 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:07.232 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:07.232 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:07.232 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:07.232 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:13:07.232 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:07.232 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:07.232 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:07.232 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:07.232 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:07.232 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:13:07.232 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:07.232 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:07.232 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:07.232 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:07.491 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:07.491 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:13:07.750 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:07.750 09:10:02 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:07.750 09:10:02 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:07.750 09:10:02 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:13:08.013 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:08.013 fio-3.35 00:13:08.013 Starting 1 thread 00:13:11.300 00:13:11.300 test: (groupid=0, jobs=1): err= 0: pid=65746: Wed Nov 20 09:10:05 2024 00:13:11.300 read: IOPS=14.5k, BW=56.7MiB/s (59.4MB/s)(113MiB/2001msec) 00:13:11.300 slat (usec): min=4, max=106, avg= 7.70, stdev= 3.56 00:13:11.300 clat (usec): min=262, max=12006, avg=4385.92, stdev=661.22 00:13:11.300 lat (usec): min=269, max=12060, avg=4393.62, stdev=662.09 00:13:11.300 clat percentiles (usec): 00:13:11.300 | 1.00th=[ 3654], 5.00th=[ 3752], 10.00th=[ 3818], 20.00th=[ 3884], 00:13:11.300 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4359], 00:13:11.300 | 70.00th=[ 4686], 80.00th=[ 4948], 90.00th=[ 5342], 95.00th=[ 5735], 00:13:11.300 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 8586], 99.95th=[10421], 00:13:11.300 | 99.99th=[11863] 00:13:11.300 bw ( KiB/s): min=48080, max=64304, per=95.99%, avg=55688.00, stdev=8158.84, samples=3 00:13:11.300 iops : min=12020, max=16076, avg=13922.00, stdev=2039.71, samples=3 00:13:11.300 write: IOPS=14.5k, BW=56.7MiB/s (59.5MB/s)(114MiB/2001msec); 0 zone resets 00:13:11.300 slat (nsec): min=4557, max=69963, avg=7845.36, stdev=3554.90 00:13:11.300 clat (usec): min=236, max=11847, avg=4396.97, stdev=663.24 00:13:11.300 lat (usec): min=242, max=11867, avg=4404.82, stdev=664.13 00:13:11.300 clat percentiles (usec): 00:13:11.300 | 1.00th=[ 3654], 5.00th=[ 3752], 10.00th=[ 3818], 20.00th=[ 3884], 00:13:11.300 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4359], 00:13:11.300 | 70.00th=[ 4686], 80.00th=[ 4948], 90.00th=[ 5342], 95.00th=[ 5735], 00:13:11.300 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 9110], 99.95th=[10552], 00:13:11.300 | 99.99th=[11600] 00:13:11.300 bw ( KiB/s): min=48640, max=64312, per=95.90%, avg=55701.33, stdev=7950.05, samples=3 00:13:11.300 iops : min=12160, max=16078, avg=13925.33, stdev=1987.51, samples=3 00:13:11.300 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:13:11.300 lat (msec) : 2=0.06%, 4=38.43%, 10=61.40%, 20=0.07% 00:13:11.300 cpu : usr=98.65%, sys=0.20%, ctx=4, 
majf=0, minf=607 00:13:11.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:11.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:11.300 issued rwts: total=29022,29056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:11.300 00:13:11.300 Run status group 0 (all jobs): 00:13:11.300 READ: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), run=2001-2001msec 00:13:11.300 WRITE: bw=56.7MiB/s (59.5MB/s), 56.7MiB/s-56.7MiB/s (59.5MB/s-59.5MB/s), io=114MiB (119MB), run=2001-2001msec 00:13:11.300 ----------------------------------------------------- 00:13:11.300 Suppressions used: 00:13:11.300 count bytes template 00:13:11.300 1 32 /usr/src/fio/parse.c 00:13:11.300 1 8 libtcmalloc_minimal.so 00:13:11.300 ----------------------------------------------------- 00:13:11.300 00:13:11.300 09:10:06 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:11.300 09:10:06 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:11.300 09:10:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:11.300 09:10:06 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:11.558 09:10:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:11.559 09:10:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:13:11.817 09:10:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:11.817 09:10:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:11.817 09:10:06 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:11.817 09:10:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:13:12.076 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:12.076 fio-3.35 00:13:12.076 Starting 1 thread 00:13:15.388 00:13:15.388 test: (groupid=0, jobs=1): err= 0: pid=65812: Wed Nov 20 09:10:10 2024 00:13:15.388 read: IOPS=17.4k, BW=68.2MiB/s (71.5MB/s)(136MiB/2001msec) 00:13:15.388 slat (nsec): min=4634, max=57979, avg=6042.09, stdev=1792.51 00:13:15.388 clat (usec): min=314, max=9904, avg=3645.95, stdev=401.37 00:13:15.388 lat (usec): min=319, max=9948, avg=3651.99, stdev=401.89 00:13:15.388 clat percentiles (usec): 00:13:15.388 | 1.00th=[ 3195], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:13:15.388 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3621], 00:13:15.388 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 4228], 95.00th=[ 4424], 00:13:15.388 | 99.00th=[ 4621], 99.50th=[ 5866], 99.90th=[ 7504], 99.95th=[ 8029], 00:13:15.388 | 99.99th=[ 9634] 00:13:15.388 bw ( KiB/s): min=63616, max=72080, per=98.80%, avg=68949.33, stdev=4642.05, samples=3 00:13:15.388 iops : min=15904, max=18020, avg=17237.33, stdev=1160.51, samples=3 00:13:15.388 write: IOPS=17.5k, BW=68.2MiB/s (71.6MB/s)(137MiB/2001msec); 0 zone resets 00:13:15.388 slat (nsec): min=4773, max=55070, avg=6233.03, stdev=1862.54 00:13:15.388 clat (usec): min=305, max=9668, avg=3659.58, stdev=408.85 00:13:15.388 lat (usec): min=310, max=9682, avg=3665.82, stdev=409.31 00:13:15.388 clat percentiles (usec): 00:13:15.388 | 1.00th=[ 3195], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3458], 00:13:15.388 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3621], 00:13:15.388 | 70.00th=[ 3654], 80.00th=[ 3752], 90.00th=[ 4228], 95.00th=[ 4424], 00:13:15.388 | 99.00th=[ 4686], 99.50th=[ 6128], 99.90th=[ 7570], 99.95th=[ 8225], 00:13:15.388 | 99.99th=[ 9503] 00:13:15.388 bw ( KiB/s): min=63400, max=71936, per=98.54%, avg=68850.67, stdev=4734.12, samples=3 00:13:15.388 iops : min=15850, max=17984, avg=17212.67, stdev=1183.53, samples=3 00:13:15.388 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:15.388 lat (msec) : 2=0.06%, 4=88.17%, 10=11.74% 00:13:15.388 cpu : usr=98.95%, sys=0.15%, ctx=8, majf=0, minf=608 00:13:15.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:15.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.388 issued rwts: total=34911,34954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.388 00:13:15.388 Run status group 0 (all jobs): 00:13:15.388 READ: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=136MiB (143MB), run=2001-2001msec 00:13:15.388 WRITE: bw=68.2MiB/s (71.6MB/s), 68.2MiB/s-68.2MiB/s (71.6MB/s-71.6MB/s), io=137MiB (143MB), run=2001-2001msec 00:13:15.648 ----------------------------------------------------- 00:13:15.648 Suppressions used: 00:13:15.648 count bytes template 00:13:15.648 1 32 /usr/src/fio/parse.c 00:13:15.648 1 8 libtcmalloc_minimal.so 00:13:15.648 ----------------------------------------------------- 00:13:15.648 00:13:15.648 09:10:10 
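Each nvme_fio pass is the same recipe applied per controller: spdk_nvme_identify confirms the namespace and checks for an extended-data-LBA format (which would change the block size; plain 4096 is used here), then stock fio is run with the SPDK ioengine LD_PRELOADed and the PCI address passed through --filename, with dots in place of colons because fio treats ':' as a filename separator. The invocation for the pass just above, as in the xtrace (libasan is preloaded first because this is an ASAN build):

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096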
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:15.648 09:10:10 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:15.648 09:10:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:15.648 09:10:10 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:15.905 09:10:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:13:15.905 09:10:10 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:16.165 09:10:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:16.165 09:10:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:16.165 09:10:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:13:16.425 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:16.425 fio-3.35 00:13:16.425 Starting 1 thread 00:13:19.719 00:13:19.719 test: (groupid=0, jobs=1): err= 0: pid=65878: Wed Nov 20 09:10:14 2024 00:13:19.719 read: IOPS=13.6k, BW=53.0MiB/s (55.6MB/s)(106MiB/2001msec) 00:13:19.719 slat (nsec): min=5099, max=85699, avg=7410.66, stdev=3177.90 00:13:19.719 clat (usec): min=251, max=9483, avg=4694.94, stdev=379.63 00:13:19.719 lat (usec): min=258, max=9569, avg=4702.35, stdev=380.08 00:13:19.719 clat percentiles (usec): 00:13:19.719 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4490], 00:13:19.719 | 30.00th=[ 4555], 40.00th=[ 
4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:13:19.719 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5538], 00:13:19.719 | 99.00th=[ 5866], 99.50th=[ 5932], 99.90th=[ 6718], 99.95th=[ 8225], 00:13:19.719 | 99.99th=[ 9372] 00:13:19.719 bw ( KiB/s): min=50672, max=54520, per=98.01%, avg=53181.33, stdev=2174.77, samples=3 00:13:19.719 iops : min=12668, max=13630, avg=13295.33, stdev=543.69, samples=3 00:13:19.719 write: IOPS=13.6k, BW=52.9MiB/s (55.5MB/s)(106MiB/2001msec); 0 zone resets 00:13:19.719 slat (nsec): min=4971, max=52438, avg=7373.51, stdev=3108.60 00:13:19.719 clat (usec): min=383, max=9356, avg=4707.46, stdev=376.32 00:13:19.719 lat (usec): min=390, max=9375, avg=4714.83, stdev=376.77 00:13:19.719 clat percentiles (usec): 00:13:19.719 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4490], 00:13:19.719 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4686], 00:13:19.719 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5604], 00:13:19.719 | 99.00th=[ 5866], 99.50th=[ 5932], 99.90th=[ 6849], 99.95th=[ 8160], 00:13:19.719 | 99.99th=[ 9110] 00:13:19.719 bw ( KiB/s): min=51008, max=54720, per=98.31%, avg=53288.00, stdev=1996.01, samples=3 00:13:19.719 iops : min=12752, max=13680, avg=13322.00, stdev=499.00, samples=3 00:13:19.719 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:19.719 lat (msec) : 2=0.05%, 4=0.73%, 10=99.19% 00:13:19.719 cpu : usr=98.95%, sys=0.10%, ctx=4, majf=0, minf=608 00:13:19.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:19.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:19.719 issued rwts: total=27145,27116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:19.719 00:13:19.719 Run status group 0 (all jobs): 00:13:19.719 READ: bw=53.0MiB/s (55.6MB/s), 53.0MiB/s-53.0MiB/s (55.6MB/s-55.6MB/s), io=106MiB (111MB), run=2001-2001msec 00:13:19.719 WRITE: bw=52.9MiB/s (55.5MB/s), 52.9MiB/s-52.9MiB/s (55.5MB/s-55.5MB/s), io=106MiB (111MB), run=2001-2001msec 00:13:19.719 ----------------------------------------------------- 00:13:19.719 Suppressions used: 00:13:19.719 count bytes template 00:13:19.719 1 32 /usr/src/fio/parse.c 00:13:19.719 1 8 libtcmalloc_minimal.so 00:13:19.719 ----------------------------------------------------- 00:13:19.719 00:13:19.719 09:10:14 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:19.719 09:10:14 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:19.719 09:10:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:19.719 09:10:14 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:19.978 09:10:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:13:19.978 09:10:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:20.237 09:10:15 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:13:20.237 09:10:15 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:20.237 09:10:15 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:13:20.496 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:20.496 fio-3.35 00:13:20.496 Starting 1 thread 00:13:24.688 00:13:24.688 test: (groupid=0, jobs=1): err= 0: pid=65940: Wed Nov 20 09:10:19 2024 00:13:24.688 read: IOPS=15.2k, BW=59.5MiB/s (62.4MB/s)(119MiB/2001msec) 00:13:24.689 slat (usec): min=4, max=233, avg= 6.42, stdev= 3.83 00:13:24.689 clat (usec): min=280, max=8755, avg=4174.84, stdev=524.25 00:13:24.689 lat (usec): min=286, max=8807, avg=4181.26, stdev=524.80 00:13:24.689 clat percentiles (usec): 00:13:24.689 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3621], 20.00th=[ 3752], 00:13:24.689 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4113], 60.00th=[ 4228], 00:13:24.689 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5211], 00:13:24.689 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[ 6915], 99.95th=[ 7177], 00:13:24.689 | 99.99th=[ 8225] 00:13:24.689 bw ( KiB/s): min=54208, max=62736, per=97.68%, avg=59496.00, stdev=4618.16, samples=3 00:13:24.689 iops : min=13552, max=15684, avg=14874.00, stdev=1154.54, samples=3 00:13:24.689 write: IOPS=15.3k, BW=59.6MiB/s (62.5MB/s)(119MiB/2001msec); 0 zone resets 00:13:24.689 slat (usec): min=4, max=449, avg= 6.61, stdev= 4.53 00:13:24.689 clat (usec): min=290, max=8435, avg=4190.61, stdev=523.41 00:13:24.689 lat (usec): min=295, max=8454, avg=4197.22, stdev=523.94 00:13:24.689 clat percentiles (usec): 00:13:24.689 | 1.00th=[ 3359], 5.00th=[ 3523], 10.00th=[ 3621], 20.00th=[ 3785], 00:13:24.689 | 30.00th=[ 3884], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4228], 00:13:24.689 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5211], 00:13:24.689 | 99.00th=[ 
5735], 99.50th=[ 6259], 99.90th=[ 6980], 99.95th=[ 7242], 00:13:24.689 | 99.99th=[ 8029] 00:13:24.689 bw ( KiB/s): min=54496, max=61976, per=97.16%, avg=59282.67, stdev=4156.22, samples=3 00:13:24.689 iops : min=13624, max=15494, avg=14820.67, stdev=1039.05, samples=3 00:13:24.689 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:24.689 lat (msec) : 2=0.06%, 4=39.68%, 10=60.22% 00:13:24.689 cpu : usr=98.25%, sys=0.25%, ctx=16, majf=0, minf=605 00:13:24.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:24.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:24.689 issued rwts: total=30470,30522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:24.689 00:13:24.689 Run status group 0 (all jobs): 00:13:24.689 READ: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=119MiB (125MB), run=2001-2001msec 00:13:24.689 WRITE: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=119MiB (125MB), run=2001-2001msec 00:13:24.689 ----------------------------------------------------- 00:13:24.689 Suppressions used: 00:13:24.689 count bytes template 00:13:24.689 1 32 /usr/src/fio/parse.c 00:13:24.689 1 8 libtcmalloc_minimal.so 00:13:24.689 ----------------------------------------------------- 00:13:24.689 00:13:24.689 09:10:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:24.689 09:10:19 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:13:24.689 00:13:24.689 real 0m17.513s 00:13:24.689 user 0m13.833s 00:13:24.689 sys 0m2.630s 00:13:24.689 ************************************ 00:13:24.689 END TEST nvme_fio 00:13:24.689 ************************************ 00:13:24.689 09:10:19 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.689 09:10:19 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:13:24.689 00:13:24.689 real 1m32.164s 00:13:24.689 user 3m46.512s 00:13:24.689 sys 0m15.997s 00:13:24.689 09:10:19 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.689 09:10:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.689 ************************************ 00:13:24.689 END TEST nvme 00:13:24.689 ************************************ 00:13:24.689 09:10:19 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:24.689 09:10:19 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:24.689 09:10:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:24.689 09:10:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.689 09:10:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.689 ************************************ 00:13:24.689 START TEST nvme_scc 00:13:24.689 ************************************ 00:13:24.689 09:10:19 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:24.689 * Looking for test storage... 
00:13:24.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:24.949 09:10:19 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.949 09:10:19 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.949 09:10:19 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.949 09:10:19 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@345 -- # : 1 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@368 -- # return 0 00:13:24.949 09:10:19 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.949 09:10:19 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.949 --rc genhtml_branch_coverage=1 00:13:24.949 --rc genhtml_function_coverage=1 00:13:24.949 --rc genhtml_legend=1 00:13:24.949 --rc geninfo_all_blocks=1 00:13:24.949 --rc geninfo_unexecuted_blocks=1 00:13:24.949 00:13:24.949 ' 00:13:24.949 09:10:19 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.949 --rc genhtml_branch_coverage=1 00:13:24.949 --rc genhtml_function_coverage=1 00:13:24.949 --rc genhtml_legend=1 00:13:24.949 --rc geninfo_all_blocks=1 00:13:24.949 --rc geninfo_unexecuted_blocks=1 00:13:24.949 00:13:24.949 ' 00:13:24.949 09:10:19 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:24.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.949 --rc genhtml_branch_coverage=1 00:13:24.949 --rc genhtml_function_coverage=1 00:13:24.949 --rc genhtml_legend=1 00:13:24.949 --rc geninfo_all_blocks=1 00:13:24.949 --rc geninfo_unexecuted_blocks=1 00:13:24.949 00:13:24.949 ' 00:13:24.949 09:10:19 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.949 --rc genhtml_branch_coverage=1 00:13:24.949 --rc genhtml_function_coverage=1 00:13:24.949 --rc genhtml_legend=1 00:13:24.949 --rc geninfo_all_blocks=1 00:13:24.949 --rc geninfo_unexecuted_blocks=1 00:13:24.949 00:13:24.949 ' 00:13:24.949 09:10:19 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.949 09:10:19 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.949 09:10:19 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.949 09:10:19 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.949 09:10:19 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.949 09:10:19 nvme_scc -- paths/export.sh@5 -- # export PATH 00:13:24.949 09:10:19 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
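The LCOV_OPTS block above is gated by the cmp_versions trace at the top of this test ('lt 1.15 2'): scripts/common.sh splits both version strings on '.', '-' and ':' and compares them component by component. A minimal standalone sketch of that less-than check, assuming numeric components (the upstream helper also handles the other comparison operators):

    # Sketch of the "lt 1.15 2" check traced above: split on .-: and
    # compare numerically, with missing components defaulting to 0.
    version_lt() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x, use legacy flags"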
00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:24.949 09:10:19 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:13:24.949 09:10:19 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:24.949 09:10:19 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:13:24.949 09:10:19 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:13:24.949 09:10:19 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:24.949 09:10:19 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:25.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:25.468 Waiting for block devices as requested 00:13:25.468 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.727 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.727 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:31.005 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:31.005 09:10:25 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:31.005 09:10:25 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:31.005 09:10:25 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:31.005 09:10:25 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:31.005 09:10:25 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
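The wall of functions.sh@21/@22/@23 lines that follows is one loop doing the same thing per register: nvme_get splits each 'reg : val' line of the id-ctrl output on ':' and stores it in a global associative array named after the controller. A minimal sketch of that parsing pattern (the whitespace trimming here is simplified, and the real helper evals into a caller-named array; the nvme-cli path matches this VM's layout):

    # "vid : 0x1b36" lines from id-ctrl become nvme0[vid]=0x1b36.
    declare -gA nvme0=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # drop padding around the key
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }              # keep the value, minus one space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"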
00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:31.005 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
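Several of the values captured here are bit fields rather than plain numbers; oacs=0x12a, for instance, advertises optional admin commands with one bit per capability (per the NVMe spec, bit 1 is Format NVM, bit 3 Namespace Management, bit 8 Doorbell Buffer Config). A hypothetical one-liner for testing such a bit, not a helper from functions.sh:

    # 0x12a = 0b1_0010_1010: bits 1, 3, 5 and 8 are set.
    oacs_bit_set() { local oacs=$1 bit=$2; (( (oacs >> bit) & 1 )); }
    oacs_bit_set 0x12a 3 && echo "Namespace Management supported"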
00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:31.006 09:10:25 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:31.006 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.007 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.007 09:10:25 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:31.008 09:10:25 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:31.008 
09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:31.008 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:31.009 09:10:25 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
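Once the controller dump finishes, the loop at functions.sh@54 above walks each controller's namespaces with an extglob pattern matching both the generic char device name (ng0n1) and the block device name (nvme0n1), then reruns nvme_get with id-ns for each; that is what produced the ng0n1 dump around this point. A standalone sketch of that walk (nullglob is added here so a controller with no namespaces yields an empty loop, which may differ from the upstream script):

    shopt -s extglob nullglob
    for ctrl in /sys/class/nvme/nvme*; do
        inst=${ctrl##*nvme}                   # "0" for .../nvme0
        for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
            echo "ctrl ${ctrl##*/}: namespace ${ns##*/}"
        done
    done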
00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:31.009 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.009 09:10:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:31.010 09:10:26 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:31.010 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.010 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:31.011 09:10:26 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:31.011 09:10:26 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:31.011 09:10:26 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:31.011 09:10:26 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:31.011 09:10:26 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:31.011 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 
09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:31.012 
09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.012 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.013 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.276 09:10:26 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.276 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
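Two of the id-ctrl fields just captured are worth decoding: sqes=0x66 and cqes=0x44 each pack two 4-bit log2 sizes, bits 3:0 being the required (minimum) queue entry size and bits 7:4 the maximum, so nvme1 reports the standard 64-byte submission and 16-byte completion queue entries. A quick sketch of the decode (the helper name is hypothetical, not from functions.sh):

# Hypothetical: decode an NVMe SQES/CQES byte into entry sizes in bytes.
decode_qes() {
        local qes=$(( $1 ))
        printf 'min=%d max=%d bytes\n' \
                $(( 1 << (qes & 0xf) )) $(( 1 << ((qes >> 4) & 0xf) ))
}
decode_qes 0x66   # SQES -> min=64 max=64 bytes
decode_qes 0x44   # CQES -> min=16 max=16 bytes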
00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.277 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:31.277 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.278 09:10:26 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
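
For orientation, the trace above is the test suite's nvme_get helper at work: nvme-cli's id-ns prints one "field : value" line per identify field, and functions.sh@21-23 split each line on the first ":" and eval the pair into a global associative array named after the device. The following is a minimal sketch reconstructed from the trace, not a quote of functions.sh; the exact whitespace trimming and the bare nvme invocation (the log uses the full /usr/local/src/nvme-cli/nvme path) are assumptions.

    nvme_get() {                         # sketch: nvme_get <array> <subcmd> <dev>
        local ref=$1 reg val
        shift                            # functions.sh@18
        local -gA "$ref=()"              # functions.sh@20: global assoc array
        while IFS=: read -r reg val; do  # functions.sh@21: split on first ":"
            [[ -n $val ]] || continue    # functions.sh@22: skip lines with no value
            reg=${reg//[[:space:]]/}     # "nsze   " -> "nsze" (assumed trim)
            val=${val# }                 # " 0x17a17a" -> "0x17a17a" (assumed trim)
            eval "${ref}[\$reg]=\$val"   # functions.sh@23: e.g. ng1n1[nsze]=0x17a17a
        done < <(nvme "$@")              # functions.sh@16 pipes the identify output in
    }

    nvme_get ng1n1 id-ns /dev/ng1n1 && echo "${ng1n1[nsze]}"   # prints 0x17a17a

Note that read -r reg val leaves everything after the first colon in val, which is why multi-colon rows such as lbaf7 survive intact as "ms:64 lbads:12 rp:0 (in use)".
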
00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:31.278 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:31.279 09:10:26 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 
09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.279 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
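
The namespace walk that produced the ng1n1 block above, and the nvme1n1 block now in progress, comes from the extglob pattern at functions.sh@54: for each controller it matches both the generic character device (ngXnY) and the block device (nvmeXnY) under /sys/class/nvme/nvmeX and hands each one to nvme_get. A sketch under the assumption that _ctrl_ns is declared locally; functions.sh@53 actually binds it as a nameref to the per-controller array nvme1_ns.

    shopt -s extglob                     # the @(...) pattern below needs extglob
    declare -A _ctrl_ns=()               # functions.sh@53 uses: local -n _ctrl_ns=nvme1_ns
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # ng1* or nvme1n*
        [[ -e $ns ]] || continue         # functions.sh@55
        ns_dev=${ns##*/}                 # functions.sh@56: ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57
        _ctrl_ns[${ns##*n}]=$ns_dev      # functions.sh@58: key 1 is set twice,
    done                                 # so the block device wins per the trace
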
00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:31.280 
09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.280 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:31.281 09:10:26 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:31.281 09:10:26 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:31.281 09:10:26 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:31.281 09:10:26 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:31.281 09:10:26 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.281 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
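
Zooming out, the controller loop at functions.sh@47-63 (visible in the trace where the nvme2 parse begins) drives everything: each /sys/class/nvme/nvmeX is filtered through pci_can_use from scripts/common.sh, identified with id-ctrl, and recorded in the suite's global maps. A sketch follows; the sysfs-to-BDF lookup and the exact map shapes are assumptions inferred from the trace, not confirmed against the script.

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # functions.sh@48
        ctrl_dev=${ctrl##*/}                              # nvme1, nvme2, ...
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed BDF lookup
        pci_can_use "$pci" || continue                    # scripts/common.sh filter
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # functions.sh@52
        # (the per-namespace walk from the earlier sketch runs here)
        ctrls[$ctrl_dev]=$ctrl_dev                        # functions.sh@60
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # functions.sh@61: ns map name
        bdfs[$ctrl_dev]=$pci                              # functions.sh@62: e.g. 0000:00:12.0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # functions.sh@63
    done
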
00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:31.282 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.282 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:31.283 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.283 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:31.284 
09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.284 
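The trace above (functions.sh@16-23) captures the pattern this suite uses to populate arrays like nvme2: run nvme-cli, read each output line as a `reg : val` pair, and eval the pair into a global associative array. A minimal sketch of that pattern follows, assuming nvme-cli's `field : value` output layout; the helper name nvme_get_sketch and the exact whitespace trimming are illustrative, not the verified functions.sh source.

    #!/usr/bin/env bash
    # Sketch of the nvme_get loop seen in the trace (functions.sh@16-23):
    # parse "field : value" lines from nvme-cli into a global associative array.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                # e.g. local -gA 'nvme2=()', as @20 shows
        while IFS=: read -r reg val; do
            reg=${reg%% *}                 # trim padding after the key (keys have no spaces)
            val=${val# }                   # trim the space before the value
            [[ -n $val ]] || continue      # skip empty fields, as @22 does
            eval "$ref[$reg]=\"\$val\""    # @23: store under the field name
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # Hypothetical usage: nvme_get_sketch nvme2 id-ctrl /dev/nvme2; echo "${nvme2[sqes]}"

Because val is the last variable handed to read, everything after the first colon lands in it intact, which is why multi-word, colon-bearing values such as the ps0 power-state line above survive as a single array entry.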
09:10:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
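The functions.sh@53-58 lines just above show how namespaces are enumerated per controller: an extglob over sysfs matches both the generic char nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...), each match is identified via nvme-cli, and the device is registered in a per-controller array through a nameref. A rough sketch under those assumptions; a top-level declare -n stands in for the function-local `local -n` at @53, and shopt settings are added so the snippet runs standalone.

    # Sketch of the namespace loop traced at functions.sh@54-58; names follow
    # the trace, but this is illustrative, not the verified source.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns               # nameref: writes land in nvme2_ns (@53)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue               # @55: only nodes that actually exist
        ns_dev=${ns##*/}                       # e.g. ng2n1 (@56)
        nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"   # @57, helper sketched above
        _ctrl_ns[${ns##*n}]=$ns_dev            # @58: key is the namespace number
    done

The ${ns##*n} key strips everything through the last "n", so ng2n1 and nvme2n1 both register under index 1; since the block node sorts after the generic node in the glob, it overwrites the earlier entry when both are present.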
00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:31.284 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.285 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.550 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.550 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:13:31.550 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:13:31.550 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.550 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.550 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:31.551 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:13:31.552 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 
09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.552 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:31.553 09:10:26 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.553 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.554 09:10:26 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:13:31.554 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.555 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n3: npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:31.555 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000
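Everything above, and each namespace block that follows, is one parse loop stepping key by key: run nvme-cli, split each output line on ':', and eval the pair into a global associative array named after the device node. A minimal sketch of that loop, reconstructed from the functions.sh@16-23 call sites visible in this trace rather than copied from SPDK:

    nvme_get() {                              # e.g. nvme_get nvme2n1 id-ns /dev/nvme2n1
        local ref=$1 reg val                  # @17
        shift                                 # @18
        local -gA "$ref=()"                   # @20: global assoc array, e.g. nvme2n1=()
        while IFS=: read -r reg val; do       # @21
            [[ -n $val ]] || continue         # @22: skips header lines (the "[[ -n '' ]]" hits)
            reg=${reg//[[:space:]]/}          # 'lbaf  4 ' -> 'lbaf4'
            eval "${ref}[$reg]=\"${val# }\""  # @23: -> nvme2n1[nsze]="0x100000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
    }

After it returns, values read back as ${nvme2n1[nsze]} and so on, which is what the @58-@63 bookkeeping further down relies on.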
00:13:31.556 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:31.556 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: nsze=0x100000
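The @54 loop header that reappears before every namespace relies on bash extglob. A sketch of how it expands for this controller, with the paths taken from the trace:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    # ${ctrl##*nvme} -> 2 and ${ctrl##*/} -> nvme2, so the pattern becomes
    # @("ng2"|"nvme2n")* and matches both the generic char nodes (ng2n1 ...)
    # and the block nodes (nvme2n1 ...) in one sorted pass
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue   # the @55 guard: skip an unexpanded pattern
        echo "${ns##*/}"           # ng2n1 ng2n2 ng2n3 nvme2n1 nvme2n2 nvme2n3
    done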
00:13:31.557 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:31.558 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:31.558 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:31.559 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:31.559 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:13:31.559 09:10:26 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:31.559 09:10:26 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:13:31.559 09:10:26 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:13:31.559 09:10:26 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:13:31.559 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
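Every namespace here reports flbas=0x4 with lbaf4 flagged "(in use)". Decoding that from the arrays the loop just built, as a sketch against the values recorded in this log (lbads is log2 of the data block size):

    declare -A nvme2n2=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ${nvme2n2[flbas]} & 0xf ))           # low nibble -> in-use format index 4
    lbaf=${nvme2n2[lbaf$fmt]}                    # -> 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}    # -> 12
    echo "block size: $(( 1 << lbads )) bytes"   # -> block size: 4096 bytes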
00:13:31.559 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:13:31.559 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:13:31.560 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:31.560 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
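Note that _ctrl_ns[3] was first set to ng2n3 and has just been overwritten with nvme2n3: ${ns##*n} strips everything through the last 'n', so the generic char node and the block node of a namespace land on the same key, and the block name wins because the glob sorts ng2n* before nvme2n*. In shorthand:

    ns=/sys/class/nvme/nvme2/ng2n3;   echo "${ns##*n}"   # -> 3
    ns=/sys/class/nvme/nvme2/nvme2n3; echo "${ns##*n}"   # -> 3
    declare -A _ctrl_ns
    _ctrl_ns[3]=ng2n3     # first pass over the sorted glob...
    _ctrl_ns[3]=nvme2n3   # ...overwritten when the block node is reached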
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:31.561 09:10:26 nvme_scc -- scripts/common.sh@18 -- # local i 00:13:31.561 09:10:26 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:31.561 09:10:26 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:31.561 09:10:26 nvme_scc -- scripts/common.sh@27 -- # return 0 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@18 -- # shift 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:31.561 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.561 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:31.562 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 
09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:31.562 09:10:26 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:31.562 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 
09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:31.563 
09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.563 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:31.564 09:10:26 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:31.564 09:10:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:13:31.564 09:10:26 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
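Note on the scan traced above: the nvme_get loop turns every "field : value" line of nvme id-ctrl output into a key of a global associative array (nvme3[vid], nvme3[oncs], ...), so later feature probes can consult cached register values instead of re-running nvme-cli. A condensed sketch of that loop follows, assuming nvme-cli is on PATH and prints the usual "field : value" format; the array and device names are illustrative, and the real functions.sh builds the array name dynamically via eval rather than hard-coding it:

    declare -gA nvme3=()
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}      # strip padding around the field name
        nvme3[$reg]=${val# }          # keep the value as nvme-cli reported it
    done < <(nvme id-ctrl /dev/nvme3)
    echo "${nvme3[oncs]}"             # prints 0x15d in this run

The ctrl_has_scc probes around this point then test bit 8 of that cached ONCS word, the bit the NVMe spec assigns to the Copy (Simple Copy) command. The same check in isolation, again assuming nvme-cli on the host and using /dev/nvme1, the controller this run ends up selecting:

    oncs=$(nvme id-ctrl /dev/nvme1 | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
    (( oncs & 1 << 8 )) && echo "/dev/nvme1 supports Simple Copy (oncs=$oncs)"

All four QEMU controllers report oncs=0x15d, so each passes the check; the suite settles on nvme1 at 0000:00:10.0 for the SCC tests that follow.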
00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:13:31.823 09:10:26 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:13:31.823 09:10:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:31.823 09:10:26 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:31.823 09:10:26 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:32.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:32.957 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:32.957 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:32.957 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:32.957 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:32.957 09:10:27 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:32.957 09:10:27 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:32.957 09:10:27 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.957 09:10:27 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:32.957 ************************************ 00:13:32.957 START TEST nvme_simple_copy 00:13:32.957 ************************************ 00:13:32.957 09:10:27 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:33.216 Initializing NVMe Controllers 00:13:33.216 Attaching to 0000:00:10.0 00:13:33.216 Controller supports SCC. Attached to 0000:00:10.0 00:13:33.216 Namespace ID: 1 size: 6GB 00:13:33.216 Initialization complete. 
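Note on the results below: the simple_copy app that just attached writes LBAs 0 through 63 with random data, issues a Simple Copy command with destination LBA 256, and reports how many destination blocks match what was written ("LBAs matching Written Data: 64"). A rough after-the-fact verification of the same property can be sketched with coreutils alone; the device node, the 4096-byte block size (reported as Namespace Block Size below), and the temp file names are assumptions for illustration, and a kernel node like /dev/nvme1n1 only exists once scripts/setup.sh reset has unbound the device from uio_pci_generic:

    # Read the 64 source LBAs and the 64 destination LBAs, then compare.
    dd if=/dev/nvme1n1 bs=4096 skip=0   count=64 of=/tmp/src.bin status=none
    dd if=/dev/nvme1n1 bs=4096 skip=256 count=64 of=/tmp/dst.bin status=none
    cmp --silent /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"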
00:13:33.216 00:13:33.216 Controller QEMU NVMe Ctrl (12340 ) 00:13:33.216 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:33.216 Namespace Block Size:4096 00:13:33.216 Writing LBAs 0 to 63 with Random Data 00:13:33.216 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:33.216 LBAs matching Written Data: 64 00:13:33.216 00:13:33.216 real 0m0.353s 00:13:33.216 user 0m0.153s 00:13:33.216 sys 0m0.097s 00:13:33.216 09:10:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.216 09:10:28 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:33.216 ************************************ 00:13:33.216 END TEST nvme_simple_copy 00:13:33.216 ************************************ 00:13:33.476 00:13:33.476 real 0m8.618s 00:13:33.476 user 0m1.630s 00:13:33.476 sys 0m1.879s 00:13:33.476 09:10:28 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.476 09:10:28 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:33.476 ************************************ 00:13:33.476 END TEST nvme_scc 00:13:33.476 ************************************ 00:13:33.476 09:10:28 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:33.476 09:10:28 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:33.476 09:10:28 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:33.476 09:10:28 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:33.476 09:10:28 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:33.476 09:10:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.476 09:10:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.476 09:10:28 -- common/autotest_common.sh@10 -- # set +x 00:13:33.476 ************************************ 00:13:33.476 START TEST nvme_fdp 00:13:33.476 ************************************ 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:13:33.476 * Looking for test storage... 00:13:33.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.476 09:10:28 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:33.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.476 --rc genhtml_branch_coverage=1 00:13:33.476 --rc genhtml_function_coverage=1 00:13:33.476 --rc genhtml_legend=1 00:13:33.476 --rc geninfo_all_blocks=1 00:13:33.476 --rc geninfo_unexecuted_blocks=1 00:13:33.476 00:13:33.476 ' 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:33.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.476 --rc genhtml_branch_coverage=1 00:13:33.476 --rc genhtml_function_coverage=1 00:13:33.476 --rc genhtml_legend=1 00:13:33.476 --rc geninfo_all_blocks=1 00:13:33.476 --rc geninfo_unexecuted_blocks=1 00:13:33.476 00:13:33.476 ' 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:33.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.476 --rc genhtml_branch_coverage=1 00:13:33.476 --rc genhtml_function_coverage=1 00:13:33.476 --rc genhtml_legend=1 00:13:33.476 --rc geninfo_all_blocks=1 00:13:33.476 --rc geninfo_unexecuted_blocks=1 00:13:33.476 00:13:33.476 ' 00:13:33.476 09:10:28 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:33.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.476 --rc genhtml_branch_coverage=1 00:13:33.476 --rc genhtml_function_coverage=1 00:13:33.476 --rc genhtml_legend=1 00:13:33.476 --rc geninfo_all_blocks=1 00:13:33.476 --rc geninfo_unexecuted_blocks=1 00:13:33.476 00:13:33.476 ' 00:13:33.476 09:10:28 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:33.476 09:10:28 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:33.476 09:10:28 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:33.740 09:10:28 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:33.740 09:10:28 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:33.740 09:10:28 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.740 09:10:28 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.740 09:10:28 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.740 09:10:28 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.740 09:10:28 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.741 09:10:28 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.741 09:10:28 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.741 09:10:28 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:33.741 09:10:28 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.741 09:10:28 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:33.741 09:10:28 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:33.741 09:10:28 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:33.741 09:10:28 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:33.741 09:10:28 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:33.741 09:10:28 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:33.741 09:10:28 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:33.741 09:10:28 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:33.741 09:10:28 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:33.741 09:10:28 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:33.741 09:10:28 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:34.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:34.324 Waiting for block devices as requested 00:13:34.324 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:34.324 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:34.324 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:34.583 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:39.865 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:39.865 09:10:34 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:39.865 09:10:34 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:39.865 09:10:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:39.865 09:10:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:39.865 09:10:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:39.865 09:10:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.865 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:39.866 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:39.866 09:10:34 nvme_fdp -- 
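Annotation: several of the values just recorded (oacs=0x12a, frmw=0x3, lpa=0x7) are bitmasks, so downstream capability checks reduce to bit tests on the stored strings. A tiny helper; the bit position is taken from the NVMe base spec (OACS bit 3 = Namespace Management support), not from anything this log asserts:

oacs=0x12a                                  # value copied from the trace above
has_bit() { (( ($1 >> $2) & 1 )); }
has_bit "$oacs" 3 && echo "namespace management supported"   # 0x12a has bit 3 set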
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.866 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:39.867 09:10:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.867 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.867 
09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:39.868 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:39.868 09:10:34 
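Annotation: at functions.sh@53-57 the scan switches from the controller to its namespaces — a nameref (_ctrl_ns) points at the per-controller array, and an extglob pattern matches both the character-device node (ng0n1) and the block node (nvme0n1) in one pass. The same pattern reconstructed in isolation; extglob is required, and nullglob is added here so a controller with no namespaces does not iterate a literal pattern:

#!/usr/bin/env bash
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme0                   # controller picked by the outer loop
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                         # ng0n1, nvme0n1, ...
    echo "namespace node: $ns_dev"
done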
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:39.868 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:39.868 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:39.869 09:10:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.869 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
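Annotation: the eight lbaf entries just captured describe the namespace's available LBA formats — lbads is log2 of the data size, and flbas selects the active one (here 0x4, matching the "(in use)" marker on lbaf4). Extracting the live block size from those strings, with both stored values copied from the trace and the low-nibble rule taken from the NVMe spec:

declare -A ng0n1=(
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'    # values copied from the trace
)
idx=$(( ng0n1[flbas] & 0xf ))                # low nibble of flbas = format index
[[ ${ng0n1[lbaf$idx]} =~ lbads:([0-9]+) ]] &&
    echo "active block size: $(( 1 << BASH_REMATCH[1] )) bytes"   # -> 4096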
00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:39.870 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.870 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:39.871 09:10:34 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:39.871 09:10:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:39.871 09:10:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:39.871 09:10:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:39.871 09:10:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.871 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:39.872 09:10:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.872 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
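The frames above show nvme_get walking the output of nvme id-ctrl /dev/nvme1 line by line: functions.sh@21 splits each record on ':' into reg and val, @22 skips empty values, and @23 evals the pair into a global associative array named after the device. A minimal sketch of that loop, reconstructed from this trace rather than copied from SPDK's nvme/functions.sh (the whitespace trimming is an assumption):

    nvme_get() {                      # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"           # global assoc array, as at functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue             # functions.sh@22: keep populated fields only
            reg=${reg//[[:space:]]/}              # strip name padding (assumption)
            val=${val#"${val%%[! ]*}"}            # drop leading spaces, keep trailing ones
            eval "${ref}[$reg]=\"$val\""          # functions.sh@23: nvme1[vid]="0x1b36" etc.
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # binary path as logged at functions.sh@16
    }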
00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
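One register above is worth converting to real units: mdts is logged as 7, and MDTS is defined as a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN). Assuming the usual 4 KiB minimum page, which this log does not show, the largest single transfer this controller accepts works out to:

    echo $(( (1 << ${nvme1[mdts]}) * 4096 ))   # 128 * 4096 = 524288 bytes (512 KiB)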
00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:39.873 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:39.874 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:13:39.875 09:10:34 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
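A few records up, functions.sh@54 starts globbing the controller's sysfs directory for both namespace node styles, the generic character device ng1n1 and the block device nvme1n1, and @57 reruns nvme_get with id-ns on each hit. Sketched from the logged pattern (the shopt line is an assumption; extglob is required for the @(...) pattern and the real script enables it elsewhere):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # matches ng1* and nvme1n*
        ns_dev=${ns##*/}                         # functions.sh@56: ng1n1, then nvme1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # functions.sh@57
    done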
00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:13:39.875 09:10:34 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:39.875 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
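The size fields parsed above pin down the namespace capacity: nsze, ncap and nuse are all 0x17a17a logical blocks, flbas is 0x7 so LBA format 7 is selected, and lbaf7 (a few records below) carries lbads:12, i.e. 4096-byte blocks plus 64 bytes of separate metadata. In bytes of data that is:

    echo $(( 0x17a17a * (1 << 12) ))   # 1548666 * 4096 = 6343335936, about 5.9 GiB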
00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:39.876 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:39.876 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.876 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:39.877 09:10:34 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:39.877 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
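Aside — what this wall of records is doing: the repeated IFS=: / read -r reg val / eval lines are test/nvme/functions.sh's nvme_get helper folding `nvme id-ns` output into a bash associative array (ng1n1, nvme1n1, ...), one key per identify field. A minimal sketch of the same pattern, with illustrative names and without the eval-by-reference indirection the real helper uses:

    declare -A ns_info
    # nvme-cli prints one "field : value" pair per line; split on the first colon.
    # With IFS=:, read puts everything after that colon into val, so values that
    # themselves contain colons (e.g. "ms:0 lbads:9 rp:0") survive intact.
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}   # strip the padding around the key
        ns_info[$reg]=${val# }     # drop the single space after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

(The nvme-cli path and /dev/nvme1n1 are the ones this CI box uses above; substitute your own.)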
00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:39.877 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:39.878 09:10:34 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:39.878 09:10:34 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:39.878 09:10:34 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:39.878 09:10:34 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
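Aside — reading the lbaf/flbas fields captured above for nvme1n1: flbas=0x7 selects LBA format 7, and lbaf7 is "ms:64 lbads:12 rp:0 (in use)", i.e. 4096-byte data blocks with 64 bytes of metadata each. The low nibble of flbas is the format index and the data block size is 2^lbads. A short decode, assuming the ns_info array from the sketch above:

    fmt=$(( ${ns_info[flbas]} & 0xf ))          # bits 3:0 select the LBA format
    lbaf=${ns_info[lbaf$fmt]}                   # "ms:64 lbads:12 rp:0 (in use)"
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}  # pull out the lbads exponent
    echo "LBA format $fmt: $(( 1 << lbads ))-byte blocks"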
00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:39.878 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:39.879 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:39.879 09:10:34 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:39.879 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.880 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
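Aside — sizing fields in the nvme2 identify data above: mdts=7 is a power-of-two multiplier on the controller's minimum page size (CAP.MPSMIN, not shown in this trace; 4 KiB is assumed below, as is typical for QEMU NVMe), so the maximum data transfer per command works out to 4 KiB << 7 = 512 KiB. sqes=0x66 and cqes=0x44 pack minimum/maximum entry sizes as exponents in the two nibbles, giving the standard 64-byte SQ and 16-byte CQ entries:

    # assuming CAP.MPSMIN of 4 KiB on this QEMU controller
    echo "max transfer: $(( (4096 << 7) / 1024 )) KiB"  # mdts=7 -> 512 KiB
    echo "SQ entry: $(( 1 << (0x66 & 0xf) )) bytes"     # required size, 64 B
    echo "CQ entry: $(( 1 << (0x44 & 0xf) )) bytes"     # required size, 16 B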
00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:39.881 
00:13:39.881 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
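The block above is the field-extraction half of nvme_get: every "reg : val" line printed by nvme-cli is split on the first colon and stored into a namespace-named associative array. Below is a minimal standalone sketch of that pattern; parse_id_ns and ns_info are hypothetical names chosen for illustration, and the real nvme_get in nvme/functions.sh does more (it evals into a caller-named array and tolerates odd fields).

    #!/usr/bin/env bash
    # Sketch only: collect `nvme id-ns` output into an associative array,
    # mirroring the IFS=: / read -r reg val loop traced above.
    declare -A ns_info=()
    parse_id_ns() { # parse_id_ns <dev> -- hypothetical helper name
        local dev=$1 reg val
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip lines with no value part
            reg=${reg//[[:space:]]/}        # "nsze   " -> "nsze"
            val=${val# }                    # drop the single leading space
            ns_info[$reg]=$val              # inner colons (lbaf lines) survive,
        done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")  # since read puts
    }                                       # the remainder into the last var
    parse_id_ns /dev/ng2n1
    echo "nsze=${ns_info[nsze]}"            # -> nsze=0x100000 on this rig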
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:13:39.882 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:13:39.883 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:39.883 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:39.883 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
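Each namespace is discovered by the extglob at functions.sh@54 above, which matches both the generic character node (ng2n2) and the block node (nvme2n1) under the controller's sysfs entry. A self-contained sketch of that globbing, assuming a controller at /sys/class/nvme/nvme2:

    #!/usr/bin/env bash
    # Sketch: list both ng<ctrl>n* and nvme<ctrl>n* entries for one controller.
    shopt -s extglob nullglob            # extglob enables @(...|...) patterns
    ctrl=/sys/class/nvme/nvme2
    # ${ctrl##*nvme} -> "2"  (so the pattern matches "ng2"...)
    # ${ctrl##*/}    -> "nvme2" (so it also matches "nvme2n"...)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        printf 'namespace node: %s\n' "${ns##*/}"
    done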
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:13:39.884 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:39.885 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:39.885 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:39.885 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:39.885 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:39.886 09:10:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
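Every namespace so far reports flbas=0x4 with lbaf4 flagged "(in use)": the active LBA format is index 4 with lbads:12. As a sketch, the block-size arithmetic (values copied from the trace):

    #!/usr/bin/env bash
    # Sketch: bits 3:0 of FLBAS index the LBA format table; the logical
    # block size is 2^LBADS. With flbas=0x4 and lbaf4 -> lbads:12:
    flbas=0x4
    fmt=$(( flbas & 0xf ))                 # -> 4, matching "lbaf4 ... (in use)"
    lbads=12                               # from "lbaf4: ms:0 lbads:12 rp:0"
    echo "lbaf$fmt in use, block size = $(( 1 << lbads )) bytes"   # 4096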
00:13:39.886 09:10:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:39.886 09:10:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:13:39.886 09:10:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:13:39.886 09:10:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:13:39.886 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:13:39.886 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000
00:13:40.152 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:13:40.153 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:13:40.153 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:13:40.153 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:34 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:40.154 09:10:35 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.154 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:40.155 09:10:35 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 
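Each lbafN value captured above keeps the raw descriptor text: ms is metadata bytes per block, lbads is the log2 of the data size, rp is relative performance, and the '(in use)' marker plus the low bits of flbas identify the active format. With flbas=0x4 here, lbaf4's lbads:12 means 2^12 = 4096-byte logical blocks. A short sketch pulling that out of the parsed array; the array literals are copied from the trace, the extraction helper itself is illustrative:

#!/usr/bin/env bash
# Sketch: recover the active block size from parsed id-ns fields.
# The flbas/lbaf4 values are taken from the trace above; this is not
# functions.sh code.
declare -A nvme2n2=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')

fmt=$(( nvme2n2[flbas] & 0xf ))      # low nibble selects the format
desc=${nvme2n2[lbaf$fmt]}
[[ $desc =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
echo "lbaf$fmt in use: lbads=$lbads -> $(( 1 << lbads ))-byte blocks"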
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.155 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:40.156 09:10:35 
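The @54-@58 records repeating between namespaces are the enumeration loop: an extglob over the controller's sysfs directory matches either the generic ngXnY node or an nvmeXnY entry, @55 checks the entry exists, @57 parses its id-ns as above, and @58 files the device name into _ctrl_ns keyed by the namespace index that ${ns##*n} leaves behind. A self-contained sketch of the same loop, run against a scratch directory instead of /sys so it executes anywhere; the paths are stand-ins:

#!/usr/bin/env bash
# Sketch of the @54 loop using a scratch dir in place of /sys/class/nvme.
shopt -s extglob nullglob
tmp=$(mktemp -d)
mkdir "$tmp/nvme2"
touch "$tmp/nvme2"/nvme2n{1,2,3}       # stand-ins for the sysfs entries

declare -A _ctrl_ns
ctrl=$tmp/nvme2
# Same pattern as the trace: @("ng2"|"nvme2n")* under the controller dir.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue           # the @55 existence check
    _ctrl_ns[${ns##*n}]=${ns##*/}      # @58: key by the trailing index
done
for i in "${!_ctrl_ns[@]}"; do
    echo "namespace $i -> ${_ctrl_ns[$i]}"
done
rm -rf "$tmp"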
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:40.156 09:10:35 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:40.156 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.156 09:10:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:40.157 09:10:35 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:40.157 09:10:35 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:40.157 09:10:35 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:40.157 09:10:35 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:40.157 09:10:35 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- 
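Before nvme3's parse starts, the scripts/common.sh@18-@27 records show pci_can_use deciding whether 0000:00:13.0 may be touched: the @21 regex (with an empty left side in this run) is a block-list miss, @25 sees an empty allow list, and @27 returns 0, device usable. A hedged sketch of that logic; the trace shows only the tests, so the PCI_BLOCKED/PCI_ALLOWED variable names and the allow-list loop are assumptions:

#!/usr/bin/env bash
# Hedged sketch of the @18-@27 pci_can_use check. With both lists empty,
# as in this run, the block-list regex misses and the empty allow list
# means "everything usable". Variable names are assumed, not from the log.
pci_can_use() {
    local i
    # @21: a blocked BDF is never usable (empty list -> no match).
    [[ " ${PCI_BLOCKED:-} " =~ " $1 " ]] && return 1
    # @25/@27: no allow list means everything else is usable.
    [[ -z ${PCI_ALLOWED:-} ]] && return 0
    # Otherwise the BDF must be explicitly allowed.
    for i in $PCI_ALLOWED; do
        [[ $i == "$1" ]] && return 0
    done
    return 1
}

pci_can_use 0000:00:13.0 && echo "0000:00:13.0 is usable"
PCI_BLOCKED="0000:00:13.0" pci_can_use 0000:00:13.0 || echo "now blocked"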
nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:40.157 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
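The mdts=7 captured at the start of nvme3's id-ctrl parse is stored as a plain string like every other field. Per the NVMe base spec, MDTS is a power of two in units of the controller's minimum memory page size, so with the usual 4 KiB CAP.MPSMIN (an assumption, the CAP register is not in this trace) this QEMU controller caps a single transfer at 2^7 x 4 KiB = 512 KiB. A tiny sketch of that computation:

#!/usr/bin/env bash
# Sketch: turn the parsed mdts field into a byte limit. The 4 KiB MPSMIN
# is an assumption; mdts=0 would mean "no limit reported".
declare -A nvme3=([mdts]=7)
mpsmin=4096
if (( nvme3[mdts] == 0 )); then
    echo "mdts=0: no transfer size limit reported"
else
    echo "max transfer: $(( (1 << nvme3[mdts]) * mpsmin )) bytes"
fi
# -> max transfer: 524288 bytes (512 KiB) for this controller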
00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 
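Capability fields like the frmw=0x3 just parsed are bit-packed bytes. Per the NVMe base spec layout (bit 0: first firmware slot read-only; bits 3:1: number of firmware slots; bit 4: activation without reset), 0x3 decodes to one firmware slot with slot 1 read-only. A small sketch of the decode; the value is from the trace, the helper is illustrative:

#!/usr/bin/env bash
# Sketch: decode the frmw capability byte parsed above. Bit layout per
# the NVMe base spec; this helper is not functions.sh code.
declare -A nvme3=([frmw]=0x3)

frmw=$(( nvme3[frmw] ))
echo "firmware slots:      $(( (frmw >> 1) & 0x7 ))"
echo "slot 1 read-only:    $(( frmw & 0x1 ))"
echo "activate w/o reset:  $(( (frmw >> 4) & 0x1 ))"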
09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:40.158 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.159 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:40.160 09:10:35 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:40.160 09:10:35 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:40.161 09:10:35 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:40.161 09:10:35 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:40.161 09:10:35 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:40.161 09:10:35 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:40.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:41.295 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.295 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.295 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.295 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:41.295 09:10:36 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:41.295 09:10:36 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:41.295 09:10:36 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.295 09:10:36 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:41.295 ************************************ 00:13:41.295 START TEST nvme_flexible_data_placement 00:13:41.295 ************************************ 00:13:41.295 09:10:36 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:41.863 Initializing NVMe Controllers 00:13:41.863 Attaching to 0000:00:13.0 00:13:41.863 Controller supports FDP Attached to 0000:00:13.0 00:13:41.863 Namespace ID: 1 Endurance Group ID: 1 00:13:41.863 Initialization complete. 
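For reference, the controller selection traced above reduces to one capability test: read each controller's cached CTRATT identify value and check bit 19, the Flexible Data Placement bit. Only nvme3 reports CTRATT 0x88010, which has that bit set, so its name and BDF (0000:00:13.0) are what get handed to the fdp test binary. A minimal standalone sketch of the same check, with the CTRATT values copied from this trace and the functions.sh helpers inlined:

    # Sketch only: CTRATT values below are the ones printed in this trace.
    declare -A ctratts=([nvme0]=0x8000 [nvme1]=0x8000 [nvme2]=0x8000 [nvme3]=0x88010)
    for ctrl in "${!ctratts[@]}"; do
        # Bit 19 of CTRATT advertises Flexible Data Placement support.
        if (( ctratts[$ctrl] & 1 << 19 )); then
            echo "$ctrl"   # only nvme3 matches here
        fi
    done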
00:13:41.863 00:13:41.863 ================================== 00:13:41.863 == FDP tests for Namespace: #01 == 00:13:41.863 ================================== 00:13:41.863 00:13:41.863 Get Feature: FDP: 00:13:41.863 ================= 00:13:41.863 Enabled: Yes 00:13:41.863 FDP configuration Index: 0 00:13:41.863 00:13:41.863 FDP configurations log page 00:13:41.863 =========================== 00:13:41.863 Number of FDP configurations: 1 00:13:41.863 Version: 0 00:13:41.863 Size: 112 00:13:41.863 FDP Configuration Descriptor: 0 00:13:41.863 Descriptor Size: 96 00:13:41.863 Reclaim Group Identifier format: 2 00:13:41.863 FDP Volatile Write Cache: Not Present 00:13:41.863 FDP Configuration: Valid 00:13:41.863 Vendor Specific Size: 0 00:13:41.863 Number of Reclaim Groups: 2 00:13:41.863 Number of Reclaim Unit Handles: 8 00:13:41.863 Max Placement Identifiers: 128 00:13:41.863 Number of Namespaces Supported: 256 00:13:41.863 Reclaim Unit Nominal Size: 6000000 bytes 00:13:41.863 Estimated Reclaim Unit Time Limit: Not Reported 00:13:41.863 RUH Desc #000: RUH Type: Initially Isolated 00:13:41.863 RUH Desc #001: RUH Type: Initially Isolated 00:13:41.863 RUH Desc #002: RUH Type: Initially Isolated 00:13:41.863 RUH Desc #003: RUH Type: Initially Isolated 00:13:41.863 RUH Desc #004: RUH Type: Initially Isolated 00:13:41.863 RUH Desc #005: RUH Type: Initially Isolated 00:13:41.863 RUH Desc #006: RUH Type: Initially Isolated 00:13:41.863 RUH Desc #007: RUH Type: Initially Isolated 00:13:41.863 00:13:41.863 FDP reclaim unit handle usage log page 00:13:41.863 ====================================== 00:13:41.863 Number of Reclaim Unit Handles: 8 00:13:41.863 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:41.863 RUH Usage Desc #001: RUH Attributes: Unused 00:13:41.863 RUH Usage Desc #002: RUH Attributes: Unused 00:13:41.863 RUH Usage Desc #003: RUH Attributes: Unused 00:13:41.863 RUH Usage Desc #004: RUH Attributes: Unused 00:13:41.863 RUH Usage Desc #005: RUH Attributes: Unused 00:13:41.863 RUH Usage Desc #006: RUH Attributes: Unused 00:13:41.863 RUH Usage Desc #007: RUH Attributes: Unused 00:13:41.863 00:13:41.863 FDP statistics log page 00:13:41.863 ======================= 00:13:41.863 Host bytes with metadata written: 849432576 00:13:41.863 Media bytes with metadata written: 849551360 00:13:41.863 Media bytes erased: 0 00:13:41.863 00:13:41.863 FDP Reclaim unit handle status 00:13:41.863 ============================== 00:13:41.863 Number of RUHS descriptors: 2 00:13:41.863 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000035eb 00:13:41.863 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:41.863 00:13:41.863 FDP write on placement id: 0 success 00:13:41.863 00:13:41.863 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:41.863 00:13:41.863 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:41.863 00:13:41.863 Get Feature: FDP Events for Placement handle: #0 00:13:41.863 ======================== 00:13:41.863 Number of FDP Events: 6 00:13:41.863 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:41.863 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:41.864 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:13:41.864 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:41.864 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:41.864 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:41.864 00:13:41.864 FDP events log page
00:13:41.864 =================== 00:13:41.864 Number of FDP events: 1 00:13:41.864 FDP Event #0: 00:13:41.864 Event Type: RU Not Written to Capacity 00:13:41.864 Placement Identifier: Valid 00:13:41.864 NSID: Valid 00:13:41.864 Location: Valid 00:13:41.864 Placement Identifier: 0 00:13:41.864 Event Timestamp: 8 00:13:41.864 Namespace Identifier: 1 00:13:41.864 Reclaim Group Identifier: 0 00:13:41.864 Reclaim Unit Handle Identifier: 0 00:13:41.864 00:13:41.864 FDP test passed 00:13:41.864 00:13:41.864 real 0m0.308s 00:13:41.864 user 0m0.108s 00:13:41.864 sys 0m0.097s 00:13:41.864 09:10:36 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.864 ************************************ 00:13:41.864 END TEST nvme_flexible_data_placement 00:13:41.864 ************************************ 00:13:41.864 09:10:36 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:13:41.864 00:13:41.864 real 0m8.372s 00:13:41.864 user 0m1.507s 00:13:41.864 sys 0m1.834s 00:13:41.864 09:10:36 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.864 ************************************ 00:13:41.864 END TEST nvme_fdp 00:13:41.864 ************************************ 00:13:41.864 09:10:36 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:41.864 09:10:36 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:13:41.864 09:10:36 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:41.864 09:10:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:41.864 09:10:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.864 09:10:36 -- common/autotest_common.sh@10 -- # set +x 00:13:41.864 ************************************ 00:13:41.864 START TEST nvme_rpc 00:13:41.864 ************************************ 00:13:41.864 09:10:36 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:41.864 * Looking for test storage... 
00:13:41.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:41.864 09:10:36 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:41.864 09:10:36 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:41.864 09:10:36 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.123 09:10:36 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:42.123 09:10:36 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.123 09:10:37 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.123 --rc genhtml_branch_coverage=1 00:13:42.123 --rc genhtml_function_coverage=1 00:13:42.123 --rc genhtml_legend=1 00:13:42.123 --rc geninfo_all_blocks=1 00:13:42.123 --rc geninfo_unexecuted_blocks=1 00:13:42.123 00:13:42.123 ' 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.123 --rc genhtml_branch_coverage=1 00:13:42.123 --rc genhtml_function_coverage=1 00:13:42.123 --rc genhtml_legend=1 00:13:42.123 --rc geninfo_all_blocks=1 00:13:42.123 --rc geninfo_unexecuted_blocks=1 00:13:42.123 00:13:42.123 ' 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.123 --rc genhtml_branch_coverage=1 00:13:42.123 --rc genhtml_function_coverage=1 00:13:42.123 --rc genhtml_legend=1 00:13:42.123 --rc geninfo_all_blocks=1 00:13:42.123 --rc geninfo_unexecuted_blocks=1 00:13:42.123 00:13:42.123 ' 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.123 --rc genhtml_branch_coverage=1 00:13:42.123 --rc genhtml_function_coverage=1 00:13:42.123 --rc genhtml_legend=1 00:13:42.123 --rc geninfo_all_blocks=1 00:13:42.123 --rc geninfo_unexecuted_blocks=1 00:13:42.123 00:13:42.123 ' 00:13:42.123 09:10:37 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:42.123 09:10:37 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:42.123 09:10:37 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:42.123 09:10:37 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:42.123 09:10:37 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67335 00:13:42.123 09:10:37 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:42.123 09:10:37 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:42.124 09:10:37 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67335 00:13:42.124 09:10:37 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67335 ']' 00:13:42.124 09:10:37 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.124 09:10:37 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.124 09:10:37 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.124 09:10:37 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.124 09:10:37 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.124 [2024-11-20 09:10:37.215823] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
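The target bdf is not hard-coded: get_first_nvme_bdf, traced just above, asks gen_nvme.sh for the generated bdev config and takes the first transport address out of it with jq, which is how the test settles on 0000:00:10.0 out of the four controllers present. A condensed sketch of that lookup, reusing the script path and jq filter shown in the trace:

    # Sketch: derive the first NVMe BDF from the generated SPDK config.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # this run found four BDFs
    echo "${bdfs[0]}"                 # 0000:00:10.0 in this run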
00:13:42.124 [2024-11-20 09:10:37.216016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67335 ] 00:13:42.383 [2024-11-20 09:10:37.408406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:42.642 [2024-11-20 09:10:37.567502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.642 [2024-11-20 09:10:37.567513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.583 09:10:38 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.583 09:10:38 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:43.583 09:10:38 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:43.852 Nvme0n1 00:13:43.852 09:10:38 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:43.852 09:10:38 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:44.111 request: 00:13:44.111 { 00:13:44.111 "bdev_name": "Nvme0n1", 00:13:44.111 "filename": "non_existing_file", 00:13:44.111 "method": "bdev_nvme_apply_firmware", 00:13:44.111 "req_id": 1 00:13:44.111 } 00:13:44.111 Got JSON-RPC error response 00:13:44.111 response: 00:13:44.111 { 00:13:44.111 "code": -32603, 00:13:44.111 "message": "open file failed." 00:13:44.111 } 00:13:44.111 09:10:39 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:44.111 09:10:39 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:44.111 09:10:39 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:44.370 09:10:39 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:44.370 09:10:39 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67335 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67335 ']' 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67335 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67335 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.370 killing process with pid 67335 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67335' 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67335 00:13:44.370 09:10:39 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67335 00:13:46.275 00:13:46.275 real 0m4.353s 00:13:46.275 user 0m8.265s 00:13:46.275 sys 0m0.774s 00:13:46.275 09:10:41 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.275 09:10:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.275 ************************************ 00:13:46.275 END TEST nvme_rpc 00:13:46.275 ************************************ 00:13:46.275 09:10:41 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:46.275 09:10:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:46.275 09:10:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.275 09:10:41 -- common/autotest_common.sh@10 -- # set +x 00:13:46.275 ************************************ 00:13:46.275 START TEST nvme_rpc_timeouts 00:13:46.275 ************************************ 00:13:46.275 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:46.275 * Looking for test storage... 00:13:46.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:46.275 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:46.275 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:13:46.275 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:46.533 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.533 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.534 09:10:41 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:46.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.534 --rc genhtml_branch_coverage=1 00:13:46.534 --rc genhtml_function_coverage=1 00:13:46.534 --rc genhtml_legend=1 00:13:46.534 --rc geninfo_all_blocks=1 00:13:46.534 --rc geninfo_unexecuted_blocks=1 00:13:46.534 00:13:46.534 ' 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:46.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.534 --rc genhtml_branch_coverage=1 00:13:46.534 --rc genhtml_function_coverage=1 00:13:46.534 --rc genhtml_legend=1 00:13:46.534 --rc geninfo_all_blocks=1 00:13:46.534 --rc geninfo_unexecuted_blocks=1 00:13:46.534 00:13:46.534 ' 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:46.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.534 --rc genhtml_branch_coverage=1 00:13:46.534 --rc genhtml_function_coverage=1 00:13:46.534 --rc genhtml_legend=1 00:13:46.534 --rc geninfo_all_blocks=1 00:13:46.534 --rc geninfo_unexecuted_blocks=1 00:13:46.534 00:13:46.534 ' 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:46.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.534 --rc genhtml_branch_coverage=1 00:13:46.534 --rc genhtml_function_coverage=1 00:13:46.534 --rc genhtml_legend=1 00:13:46.534 --rc geninfo_all_blocks=1 00:13:46.534 --rc geninfo_unexecuted_blocks=1 00:13:46.534 00:13:46.534 ' 00:13:46.534 09:10:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.534 09:10:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67406 00:13:46.534 09:10:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67406 00:13:46.534 09:10:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67442 00:13:46.534 09:10:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:13:46.534 09:10:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67442 00:13:46.534 09:10:41 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67442 ']' 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.534 09:10:41 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:46.534 [2024-11-20 09:10:41.543151] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:13:46.534 [2024-11-20 09:10:41.543335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67442 ] 00:13:46.793 [2024-11-20 09:10:41.725167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.793 [2024-11-20 09:10:41.841196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.793 [2024-11-20 09:10:41.841210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.729 09:10:42 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.729 09:10:42 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:13:47.729 Checking default timeout settings: 00:13:47.729 09:10:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:47.729 09:10:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:48.296 Making settings changes with rpc: 00:13:48.296 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:48.296 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:48.296 Check default vs. modified settings: 00:13:48.296 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:13:48.296 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67406 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67406 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:48.862 Setting action_on_timeout is changed as expected. 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67406 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67406 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:48.862 Setting timeout_us is changed as expected. 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67406 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67406 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:48.862 Setting timeout_admin_us is changed as expected. 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67406 /tmp/settings_modified_67406 00:13:48.862 09:10:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67442 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67442 ']' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67442 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67442 00:13:48.862 killing process with pid 67442 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67442' 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67442 00:13:48.862 09:10:43 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67442 00:13:51.393 RPC TIMEOUT SETTING TEST PASSED. 00:13:51.393 09:10:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
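Each timeout setting above is verified with the same three-stage pipeline: grep the key out of the config dumped by save_config before and after bdev_nvme_set_options, pull the value column with awk, strip punctuation with sed, and require that the two results differ. A minimal sketch of that comparison loop, with the tmpfile names from this run:

    # Sketch: confirm each bdev_nvme option changed between the two saved configs.
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67406 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67406 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" == "$after" ]] && exit 1   # unchanged value means the rpc had no effect
        echo "Setting $setting is changed as expected."
    done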
00:13:51.393 00:13:51.393 real 0m5.128s 00:13:51.393 user 0m9.834s 00:13:51.393 sys 0m0.824s 00:13:51.393 ************************************ 00:13:51.393 END TEST nvme_rpc_timeouts 00:13:51.393 ************************************ 00:13:51.393 09:10:46 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.393 09:10:46 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:51.393 09:10:46 -- spdk/autotest.sh@239 -- # uname -s 00:13:51.393 09:10:46 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:51.393 09:10:46 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:51.393 09:10:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.393 09:10:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.393 09:10:46 -- common/autotest_common.sh@10 -- # set +x 00:13:51.393 ************************************ 00:13:51.393 START TEST sw_hotplug 00:13:51.393 ************************************ 00:13:51.393 09:10:46 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:51.393 * Looking for test storage... 00:13:51.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:51.393 09:10:46 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:51.393 09:10:46 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:13:51.393 09:10:46 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:51.652 09:10:46 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:51.652 09:10:46 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.653 09:10:46 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:51.653 09:10:46 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.653 09:10:46 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.653 09:10:46 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.653 09:10:46 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:51.653 09:10:46 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.653 09:10:46 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:51.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.653 --rc genhtml_branch_coverage=1 00:13:51.653 --rc genhtml_function_coverage=1 00:13:51.653 --rc genhtml_legend=1 00:13:51.653 --rc geninfo_all_blocks=1 00:13:51.653 --rc geninfo_unexecuted_blocks=1 00:13:51.653 00:13:51.653 ' 00:13:51.653 09:10:46 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:51.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.653 --rc genhtml_branch_coverage=1 00:13:51.653 --rc genhtml_function_coverage=1 00:13:51.653 --rc genhtml_legend=1 00:13:51.653 --rc geninfo_all_blocks=1 00:13:51.653 --rc geninfo_unexecuted_blocks=1 00:13:51.653 00:13:51.653 ' 00:13:51.653 09:10:46 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:51.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.653 --rc genhtml_branch_coverage=1 00:13:51.653 --rc genhtml_function_coverage=1 00:13:51.653 --rc genhtml_legend=1 00:13:51.653 --rc geninfo_all_blocks=1 00:13:51.653 --rc geninfo_unexecuted_blocks=1 00:13:51.653 00:13:51.653 ' 00:13:51.653 09:10:46 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:51.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.653 --rc genhtml_branch_coverage=1 00:13:51.653 --rc genhtml_function_coverage=1 00:13:51.653 --rc genhtml_legend=1 00:13:51.653 --rc geninfo_all_blocks=1 00:13:51.653 --rc geninfo_unexecuted_blocks=1 00:13:51.653 00:13:51.653 ' 00:13:51.653 09:10:46 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:51.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:52.172 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:52.172 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:52.172 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:52.172 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:52.172 09:10:47 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:52.172 09:10:47 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:52.172 09:10:47 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
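nvme_in_userspace, expanded in the trace that follows, never touches /dev: it walks PCI functions whose class code is 0108 with prog-if 02 (mass storage, non-volatile memory, NVM Express) via lspci, then keeps the BDFs the tests may use. A condensed sketch of that enumeration (the sysfs check mirrors scripts/common.sh@322; the PCI_ALLOWED/pci_can_use filtering is simplified away):

    # Sketch: enumerate NVMe-class PCI functions the way nvme_in_userspace does.
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 | tr -d '"' | awk '$2 == "0108" {print $1}'))
    for bdf in "${nvmes[@]}"; do
        # On Linux, report the function if the kernel nvme driver path exists for it.
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
    done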
00:13:52.172 09:10:47 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:52.172 09:10:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:52.173 09:10:47 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:52.173 09:10:47 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:52.173 09:10:47 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:52.173 09:10:47 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:52.173 09:10:47 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:52.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:52.740 Waiting for block devices as requested 00:13:52.740 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.999 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.999 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:52.999 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:58.295 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:58.295 09:10:53 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:58.295 09:10:53 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:58.554 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:58.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:58.812 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:59.071 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:59.329 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.329 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:59.329 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:59.329 09:10:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68321 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:59.330 09:10:54 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:59.330 09:10:54 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:59.330 09:10:54 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:59.330 09:10:54 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:59.330 09:10:54 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:59.330 09:10:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:59.589 Initializing NVMe Controllers 00:13:59.589 Attaching to 0000:00:10.0 00:13:59.589 Attaching to 0000:00:11.0 00:13:59.589 Attached to 0000:00:11.0 00:13:59.589 Attached to 0000:00:10.0 00:13:59.589 Initialization complete. Starting I/O... 
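While the I/O counters below climb, remove_attach_helper (use_bdev=false in this first pass) drives 3 hotplug events against the two allowed controllers, and the hotplug example app (-n 6 -r 6) expects to observe 6 insertions and 6 removals in total. Each cycle corresponds to the "echo 1" / "echo uio_pci_generic" trace lines further down: surprise-remove every controller through sysfs, give the app hotplug_wait seconds to notice, then rescan and rebind. A sketch of one cycle; the exact sysfs file names are inferred from the conventional PCI hot-plug interface, not copied from sw_hotplug.sh:

    # One remove/attach cycle in non-bdev mode (a sketch; the sysfs paths
    # are the standard PCI hotplug files and are an assumption here).
    for bdf in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise hot-remove
    done
    sleep "$hotplug_wait"                             # app must see the removal
    echo 1 > /sys/bus/pci/rescan                      # rediscover the devices
    for bdf in "${nvmes[@]}"; do
        # pin the rediscovered controller back to its userspace driver
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
    done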
00:13:59.589 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:59.589 QEMU NVMe Ctrl (12340 ): 2 I/Os completed (+2) 00:13:59.589 00:14:00.966 QEMU NVMe Ctrl (12341 ): 1232 I/Os completed (+1232) 00:14:00.966 QEMU NVMe Ctrl (12340 ): 1243 I/Os completed (+1241) 00:14:00.966 00:14:01.903 QEMU NVMe Ctrl (12341 ): 2848 I/Os completed (+1616) 00:14:01.903 QEMU NVMe Ctrl (12340 ): 2863 I/Os completed (+1620) 00:14:01.903 00:14:02.839 QEMU NVMe Ctrl (12341 ): 4649 I/Os completed (+1801) 00:14:02.839 QEMU NVMe Ctrl (12340 ): 4664 I/Os completed (+1801) 00:14:02.839 00:14:03.776 QEMU NVMe Ctrl (12341 ): 6449 I/Os completed (+1800) 00:14:03.776 QEMU NVMe Ctrl (12340 ): 6483 I/Os completed (+1819) 00:14:03.776 00:14:04.714 QEMU NVMe Ctrl (12341 ): 8258 I/Os completed (+1809) 00:14:04.714 QEMU NVMe Ctrl (12340 ): 8300 I/Os completed (+1817) 00:14:04.714 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:05.648 [2024-11-20 09:11:00.436946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:05.648 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:05.648 [2024-11-20 09:11:00.439050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.439132] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.439162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.439188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:05.648 [2024-11-20 09:11:00.442081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.442143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.442169] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.442191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:05.648 [2024-11-20 09:11:00.466626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:05.648 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:05.648 [2024-11-20 09:11:00.469014] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.469232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.469388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.469525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:05.648 [2024-11-20 09:11:00.472564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.472757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.472796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 [2024-11-20 09:11:00.472818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:05.648 Attaching to 0000:00:10.0 00:14:05.648 Attached to 0000:00:10.0 00:14:05.648 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:14:05.648 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:05.648 09:11:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:05.648 Attaching to 0000:00:11.0 00:14:05.907 Attached to 0000:00:11.0 00:14:06.842 QEMU NVMe Ctrl (12340 ): 1816 I/Os completed (+1816) 00:14:06.842 QEMU NVMe Ctrl (12341 ): 1656 I/Os completed (+1656) 00:14:06.842 00:14:07.778 QEMU NVMe Ctrl (12340 ): 3632 I/Os completed (+1816) 00:14:07.778 QEMU NVMe Ctrl (12341 ): 3473 I/Os completed (+1817) 00:14:07.778 00:14:08.713 QEMU NVMe Ctrl (12340 ): 5472 I/Os completed (+1840) 00:14:08.713 QEMU NVMe Ctrl (12341 ): 5340 I/Os completed (+1867) 00:14:08.713 00:14:09.649 QEMU NVMe Ctrl (12340 ): 7292 I/Os completed (+1820) 00:14:09.649 QEMU NVMe Ctrl (12341 ): 7186 I/Os completed (+1846) 00:14:09.649 00:14:10.584 QEMU NVMe Ctrl (12340 ): 9124 I/Os completed (+1832) 00:14:10.584 QEMU NVMe Ctrl (12341 ): 9022 I/Os completed (+1836) 00:14:10.584 00:14:11.959 QEMU NVMe Ctrl (12340 ): 10988 I/Os completed (+1864) 00:14:11.959 QEMU NVMe Ctrl (12341 ): 10907 I/Os completed (+1885) 00:14:11.959 00:14:12.894 QEMU NVMe Ctrl (12340 ): 12828 I/Os completed (+1840) 00:14:12.894 QEMU NVMe Ctrl (12341 ): 12750 I/Os completed (+1843) 00:14:12.894 00:14:13.829 QEMU NVMe Ctrl (12340 ): 14652 I/Os completed (+1824) 00:14:13.829 QEMU NVMe 
Ctrl (12341 ): 14586 I/Os completed (+1836) 00:14:13.829 00:14:14.803 QEMU NVMe Ctrl (12340 ): 16326 I/Os completed (+1674) 00:14:14.803 QEMU NVMe Ctrl (12341 ): 16278 I/Os completed (+1692) 00:14:14.803 00:14:15.739 QEMU NVMe Ctrl (12340 ): 18150 I/Os completed (+1824) 00:14:15.739 QEMU NVMe Ctrl (12341 ): 18110 I/Os completed (+1832) 00:14:15.739 00:14:16.674 QEMU NVMe Ctrl (12340 ): 19962 I/Os completed (+1812) 00:14:16.674 QEMU NVMe Ctrl (12341 ): 19931 I/Os completed (+1821) 00:14:16.674 00:14:17.621 QEMU NVMe Ctrl (12340 ): 21782 I/Os completed (+1820) 00:14:17.621 QEMU NVMe Ctrl (12341 ): 21763 I/Os completed (+1832) 00:14:17.621 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:17.879 [2024-11-20 09:11:12.775295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:17.879 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:17.879 [2024-11-20 09:11:12.777612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.777902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.778006] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.778106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:17.879 [2024-11-20 09:11:12.781889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.782114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.782319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.782542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:17.879 [2024-11-20 09:11:12.803493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:17.879 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:17.879 [2024-11-20 09:11:12.805716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.805949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.806179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.806349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:17.879 [2024-11-20 09:11:12.809503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.809738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.809963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 [2024-11-20 09:11:12.810172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:17.879 09:11:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:18.138 09:11:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:18.138 09:11:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:18.138 09:11:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:18.138 09:11:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:18.138 09:11:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:18.138 Attaching to 0000:00:10.0 00:14:18.138 Attached to 0000:00:10.0 00:14:18.138 09:11:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:18.138 09:11:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:18.138 09:11:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:18.138 Attaching to 0000:00:11.0 00:14:18.138 Attached to 0000:00:11.0 00:14:18.706 QEMU NVMe Ctrl (12340 ): 1088 I/Os completed (+1088) 00:14:18.706 QEMU NVMe Ctrl (12341 ): 943 I/Os completed (+943) 00:14:18.706 00:14:19.642 QEMU NVMe Ctrl (12340 ): 2907 I/Os completed (+1819) 00:14:19.642 QEMU NVMe Ctrl (12341 ): 2782 I/Os completed (+1839) 00:14:19.642 00:14:20.577 QEMU NVMe Ctrl (12340 ): 4663 I/Os completed (+1756) 00:14:20.577 QEMU NVMe Ctrl (12341 ): 4561 I/Os completed (+1779) 00:14:20.577 00:14:21.957 QEMU NVMe Ctrl (12340 ): 6551 I/Os completed (+1888) 00:14:21.957 QEMU NVMe Ctrl (12341 ): 6450 I/Os completed (+1889) 00:14:21.957 00:14:22.907 QEMU NVMe Ctrl (12340 ): 8526 I/Os completed (+1975) 00:14:22.907 QEMU NVMe Ctrl (12341 ): 8421 I/Os completed (+1971) 00:14:22.907 00:14:23.855 QEMU NVMe Ctrl (12340 ): 10266 I/Os completed (+1740) 00:14:23.855 QEMU NVMe Ctrl (12341 ): 10200 I/Os completed (+1779) 00:14:23.855 00:14:24.790 QEMU NVMe Ctrl (12340 ): 12050 I/Os completed (+1784) 00:14:24.790 QEMU NVMe Ctrl (12341 ): 12034 I/Os completed (+1834) 00:14:24.790 00:14:25.725 QEMU NVMe Ctrl (12340 ): 13710 I/Os completed (+1660) 00:14:25.725 QEMU NVMe Ctrl (12341 ): 13721 I/Os completed (+1687) 00:14:25.725 00:14:26.660 QEMU 
NVMe Ctrl (12340 ): 15262 I/Os completed (+1552) 00:14:26.660 QEMU NVMe Ctrl (12341 ): 15333 I/Os completed (+1612) 00:14:26.660 00:14:27.597 QEMU NVMe Ctrl (12340 ): 17038 I/Os completed (+1776) 00:14:27.597 QEMU NVMe Ctrl (12341 ): 17152 I/Os completed (+1819) 00:14:27.597 00:14:28.967 QEMU NVMe Ctrl (12340 ): 18708 I/Os completed (+1670) 00:14:28.967 QEMU NVMe Ctrl (12341 ): 18997 I/Os completed (+1845) 00:14:28.967 00:14:29.899 QEMU NVMe Ctrl (12340 ): 20400 I/Os completed (+1692) 00:14:29.899 QEMU NVMe Ctrl (12341 ): 20808 I/Os completed (+1811) 00:14:29.899 00:14:30.156 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:30.156 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:30.156 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.156 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.156 [2024-11-20 09:11:25.127082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:30.156 Controller removed: QEMU NVMe Ctrl (12340 ) 00:14:30.156 [2024-11-20 09:11:25.129430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.156 [2024-11-20 09:11:25.129655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.156 [2024-11-20 09:11:25.129842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.156 [2024-11-20 09:11:25.129921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.156 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:30.156 [2024-11-20 09:11:25.136991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.156 [2024-11-20 09:11:25.137055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.156 [2024-11-20 09:11:25.137080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.156 [2024-11-20 09:11:25.137102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.156 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.156 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.156 [2024-11-20 09:11:25.156210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
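The whole helper runs under timing_cmd, which is how a bare "43.01" becomes helper_time in the summary printed just below: bash's time keyword honours TIMEFORMAT, and %2R prints only the real (wall) time with two decimals. A reduced sketch of that capture; the real wrapper also juggles file descriptors with exec so the timed command keeps writing to the log, which is omitted here:

    # Capture a command's wall time the way timing_cmd does (sketch).
    timing_cmd_sketch() {
        local TIMEFORMAT=%2R time
        # The timed command's own output is discarded here for brevity;
        # `time` writes its one-line report to the group's stderr, which
        # the command substitution collects.
        time=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
        echo "$time"    # e.g. 43.01
    }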
00:14:30.156 Controller removed: QEMU NVMe Ctrl (12341 ) 00:14:30.156 [2024-11-20 09:11:25.158172] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.156 [2024-11-20 09:11:25.158483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.157 [2024-11-20 09:11:25.158712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.157 [2024-11-20 09:11:25.158787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.157 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:30.157 [2024-11-20 09:11:25.161527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.157 [2024-11-20 09:11:25.161578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.157 [2024-11-20 09:11:25.161607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.157 [2024-11-20 09:11:25.161628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.157 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:14:30.157 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:30.157 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:30.157 EAL: Scan for (pci) bus failed. 00:14:30.157 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:30.157 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:30.157 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:30.413 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:30.413 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:30.413 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:30.413 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:30.413 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:30.413 Attaching to 0000:00:10.0 00:14:30.413 Attached to 0000:00:10.0 00:14:30.413 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:30.414 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:30.414 09:11:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:30.414 Attaching to 0000:00:11.0 00:14:30.414 Attached to 0000:00:11.0 00:14:30.414 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:14:30.414 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:14:30.414 [2024-11-20 09:11:25.451901] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:42.684 09:11:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:42.684 09:11:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:42.684 09:11:37 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.01 00:14:42.684 09:11:37 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.01 00:14:42.684 09:11:37 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:42.684 09:11:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.01 00:14:42.684 09:11:37 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.01 2 00:14:42.684 remove_attach_helper took 43.01s to complete (handling 2 nvme drive(s)) 09:11:37 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:49.246 09:11:43 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68321 00:14:49.246 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68321) - No such process 00:14:49.246 09:11:43 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68321 00:14:49.246 09:11:43 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:49.246 09:11:43 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:49.246 09:11:43 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:49.246 09:11:43 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68863 00:14:49.246 09:11:43 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:49.246 09:11:43 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:49.246 09:11:43 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68863 00:14:49.246 09:11:43 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68863 ']' 00:14:49.246 09:11:43 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.246 09:11:43 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.246 09:11:43 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.246 09:11:43 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.246 09:11:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:49.246 [2024-11-20 09:11:43.595327] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
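tgt_run_hotplug switches to target mode: spdk_tgt is launched (pid 68863 here) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers, after which hot-plug monitoring is enabled over RPC; the startup banner continues below with the DPDK EAL parameters. A sketch of that handshake, assuming the stock scripts/rpc.py client — the polling shown is a simplification of what waitforlisten actually does:

    # Start the SPDK target and wait until its RPC socket is usable (sketch).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" || exit 1   # give up if the target died
        sleep 0.1
    done
    scripts/rpc.py bdev_nvme_set_hotplug -e  # enable hotplug monitoring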
00:14:49.246 [2024-11-20 09:11:43.595763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68863 ] 00:14:49.246 [2024-11-20 09:11:43.790281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.246 [2024-11-20 09:11:43.946687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:14:49.814 09:11:44 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.814 09:11:44 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:49.814 09:11:44 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:49.814 09:11:44 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:49.814 09:11:44 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:49.814 09:11:44 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:49.814 09:11:44 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:49.814 09:11:44 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:49.814 09:11:44 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:49.814 09:11:44 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:56.379 09:11:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:56.379 09:11:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.379 09:11:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:56.379 09:11:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.379 [2024-11-20 09:11:51.008856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
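With use_bdev=true the helper no longer looks at sysfs to decide whether a controller is gone; it asks the target which PCI addresses still back an NVMe bdev. The bdev_bdfs pipeline traced above reconstructs to:

    # bdev_bdfs as traced: PCI addresses of controllers that still back
    # a bdev, according to the target. Assumes every bdev is NVMe-backed,
    # as in this run; jq would error out on a bdev without
    # .driver_specific.nvme. rpc_cmd is the autotest wrapper around rpc.py.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }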
00:14:56.379 [2024-11-20 09:11:51.011735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.379 [2024-11-20 09:11:51.011926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.379 [2024-11-20 09:11:51.012177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.379 [2024-11-20 09:11:51.012322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.379 [2024-11-20 09:11:51.012418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.379 [2024-11-20 09:11:51.012570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.379 [2024-11-20 09:11:51.012790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.379 [2024-11-20 09:11:51.012850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.379 [2024-11-20 09:11:51.012870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.379 [2024-11-20 09:11:51.012903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.379 [2024-11-20 09:11:51.012918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.379 [2024-11-20 09:11:51.012935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.379 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:56.379 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:56.379 [2024-11-20 09:11:51.408809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
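The "(( 2 > 0 ))" / "sleep 0.5" pair traced just above is one iteration of the removal wait: both controllers were hot-removed, bdev_bdfs still reports two addresses, so the helper sleeps and re-polls until the list is empty. Written out as the loop it traces:

    # Wait until the removed controllers disappear from the target's view.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done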
00:14:56.379 [2024-11-20 09:11:51.411738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.379 [2024-11-20 09:11:51.411801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.379 [2024-11-20 09:11:51.411826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.379 [2024-11-20 09:11:51.411853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.379 [2024-11-20 09:11:51.411871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.379 [2024-11-20 09:11:51.411885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.379 [2024-11-20 09:11:51.411902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.379 [2024-11-20 09:11:51.411916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.379 [2024-11-20 09:11:51.411931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.379 [2024-11-20 09:11:51.411945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:56.379 [2024-11-20 09:11:51.411960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:56.379 [2024-11-20 09:11:51.411973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:56.638 09:11:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.638 09:11:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:56.638 09:11:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:56.638 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:56.897 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:56.897 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:56.897 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:56.897 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:56.897 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:56.897 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:56.897 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:56.897 09:11:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:09.098 09:12:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.098 09:12:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:09.098 09:12:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:09.098 09:12:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:09.098 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:09.098 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:09.098 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:09.098 09:12:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.098 09:12:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:09.098 [2024-11-20 09:12:04.009063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
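After rebinding and a sleep 12 (presumably 2 × hotplug_wait), the helper verifies the re-attach: the [[ 0000:00:10.0 0000:00:11.0 == ... ]] test traced just above compares the re-polled bdev_bdfs output against the expected pair before the next event starts. An equivalent check, as a sketch (the error handling is illustrative, not copied from sw_hotplug.sh):

    # Re-attach verification, matching the [[ ... ]] comparison traced above.
    sleep $((hotplug_wait * 2))
    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "${nvmes[*]}" ]] \
        || { echo "controllers did not reattach: ${bdfs[*]}" >&2; exit 1; }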
00:15:09.098 [2024-11-20 09:12:04.012121] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:09.098 [2024-11-20 09:12:04.012191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.098 [2024-11-20 09:12:04.012215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.098 [2024-11-20 09:12:04.012247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:09.098 [2024-11-20 09:12:04.012264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.098 [2024-11-20 09:12:04.012281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.098 [2024-11-20 09:12:04.012297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:09.098 [2024-11-20 09:12:04.012314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.098 [2024-11-20 09:12:04.012328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.098 [2024-11-20 09:12:04.012346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:09.098 [2024-11-20 09:12:04.012360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.098 [2024-11-20 09:12:04.012376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.098 09:12:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.098 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:09.098 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:09.665 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:09.665 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:09.665 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:09.665 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:09.665 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:09.665 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:09.665 09:12:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.665 09:12:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:09.665 09:12:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.665 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:09.665 09:12:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:09.665 [2024-11-20 09:12:04.709109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:09.665 [2024-11-20 09:12:04.712304] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:09.665 [2024-11-20 09:12:04.712366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.665 [2024-11-20 09:12:04.712392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.665 [2024-11-20 09:12:04.712418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:09.665 [2024-11-20 09:12:04.712435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.665 [2024-11-20 09:12:04.712447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.665 [2024-11-20 09:12:04.712464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:09.665 [2024-11-20 09:12:04.712476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.665 [2024-11-20 09:12:04.712490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.665 [2024-11-20 09:12:04.712503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:09.665 [2024-11-20 09:12:04.712517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:09.665 [2024-11-20 09:12:04.712529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:10.232 09:12:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:10.232 09:12:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:10.232 09:12:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:10.232 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:10.490 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:10.490 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:10.490 09:12:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:22.690 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:22.690 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:22.690 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:22.690 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:22.690 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:22.690 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:22.690 09:12:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.690 09:12:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:22.690 09:12:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.690 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:22.691 [2024-11-20 09:12:17.509326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:22.691 [2024-11-20 09:12:17.513230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.691 [2024-11-20 09:12:17.513454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.691 [2024-11-20 09:12:17.513622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.691 [2024-11-20 09:12:17.513917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.691 [2024-11-20 09:12:17.514093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.691 [2024-11-20 09:12:17.514160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.691 [2024-11-20 09:12:17.514181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.691 [2024-11-20 09:12:17.514202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.691 [2024-11-20 09:12:17.514217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.691 [2024-11-20 09:12:17.514254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:22.691 [2024-11-20 09:12:17.514269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.691 [2024-11-20 09:12:17.514288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:22.691 09:12:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.691 09:12:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:22.691 09:12:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:22.691 09:12:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:23.258 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:23.258 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:23.258 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:23.258 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.258 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.258 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.258 09:12:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.258 09:12:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.258 09:12:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.258 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:23.258 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:23.258 [2024-11-20 09:12:18.209311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:23.258 [2024-11-20 09:12:18.212938] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.258 [2024-11-20 09:12:18.212998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.258 [2024-11-20 09:12:18.213025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.258 [2024-11-20 09:12:18.213053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.258 [2024-11-20 09:12:18.213073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.258 [2024-11-20 09:12:18.213088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.258 [2024-11-20 09:12:18.213107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.258 [2024-11-20 09:12:18.213120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.258 [2024-11-20 09:12:18.213142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.258 [2024-11-20 09:12:18.213156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:23.258 [2024-11-20 09:12:18.213174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.258 [2024-11-20 09:12:18.213187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:23.827 09:12:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.827 09:12:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:23.827 09:12:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:23.827 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:24.085 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:24.085 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:24.085 09:12:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@719 -- # time=46.13 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@720 -- # echo 46.13 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=46.13 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 46.13 2 00:15:36.286 remove_attach_helper took 46.13s to complete (handling 2 nvme drive(s)) 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:15:36.286 09:12:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:15:36.286 09:12:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:15:36.286 09:12:31 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:42.845 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:42.845 09:12:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.845 09:12:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.845 [2024-11-20 09:12:37.176576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:42.845 [2024-11-20 09:12:37.178801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.845 [2024-11-20 09:12:37.178989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.845 [2024-11-20 09:12:37.179178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.845 [2024-11-20 09:12:37.179350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.845 [2024-11-20 09:12:37.179461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.845 [2024-11-20 09:12:37.179597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.845 [2024-11-20 09:12:37.179744] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.846 [2024-11-20 09:12:37.179773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.846 [2024-11-20 09:12:37.179788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.846 [2024-11-20 09:12:37.179806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.846 [2024-11-20 09:12:37.179819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.846 [2024-11-20 09:12:37.179838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.846 09:12:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:42.846 [2024-11-20 09:12:37.676587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:42.846 [2024-11-20 09:12:37.678915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.846 [2024-11-20 09:12:37.679129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.846 [2024-11-20 09:12:37.679167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.846 [2024-11-20 09:12:37.679196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.846 [2024-11-20 09:12:37.679214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.846 [2024-11-20 09:12:37.679227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.846 [2024-11-20 09:12:37.679246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.846 [2024-11-20 09:12:37.679259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.846 [2024-11-20 09:12:37.679274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.846 [2024-11-20 09:12:37.679288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:42.846 [2024-11-20 09:12:37.679302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.846 [2024-11-20 09:12:37.679315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:42.846 09:12:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.846 09:12:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:42.846 09:12:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:42.846 09:12:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:43.104 09:12:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:43.104 09:12:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:43.104 09:12:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:55.330 09:12:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:55.330 09:12:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:55.330 09:12:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:55.330 [2024-11-20 09:12:50.176810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:55.330 09:12:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable [2024-11-20 09:12:50.179008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.330 [2024-11-20 09:12:50.179193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.330 [2024-11-20 09:12:50.179344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.330 09:12:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x [2024-11-20 09:12:50.179543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.330 [2024-11-20 09:12:50.179597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.330 [2024-11-20 09:12:50.179800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.330 [2024-11-20 09:12:50.179961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.330 [2024-11-20 09:12:50.180135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.330 [2024-11-20 09:12:50.180274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.330 [2024-11-20 09:12:50.180599] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.330 [2024-11-20 09:12:50.180681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.330 [2024-11-20 09:12:50.180764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.330 09:12:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:55.330 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:55.589 [2024-11-20 09:12:50.576811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:15:55.589 [2024-11-20 09:12:50.579209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.589 [2024-11-20 09:12:50.579420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.589 [2024-11-20 09:12:50.579579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.589 [2024-11-20 09:12:50.579776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.589 [2024-11-20 09:12:50.580033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.589 [2024-11-20 09:12:50.580217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.589 [2024-11-20 09:12:50.580473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.589 [2024-11-20 09:12:50.580749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.589 [2024-11-20 09:12:50.580915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.589 [2024-11-20 09:12:50.581079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:55.589 [2024-11-20 09:12:50.581237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.589 [2024-11-20 09:12:50.581374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:55.849 09:12:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.849 09:12:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:55.849 09:12:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:55.849 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:56.118 09:12:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:56.118 09:12:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:56.118 09:12:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:56.118 09:12:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:56.118 09:12:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:56.118 09:12:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:56.118 09:12:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:56.118 09:12:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:08.394 09:13:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.394 09:13:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:08.394 09:13:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:16:08.394 [2024-11-20 09:13:03.176989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:16:08.394 [2024-11-20 09:13:03.179717] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.394 [2024-11-20 09:13:03.179886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.394 [2024-11-20 09:13:03.180057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.394 [2024-11-20 09:13:03.180223] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.394 [2024-11-20 09:13:03.180247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.394 [2024-11-20 09:13:03.180266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.394 [2024-11-20 09:13:03.180283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.394 [2024-11-20 09:13:03.180303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.394 [2024-11-20 09:13:03.180317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.394 [2024-11-20 09:13:03.180334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.394 [2024-11-20 09:13:03.180348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.394 [2024-11-20 09:13:03.180364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:16:08.394 09:13:03 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:16:08.394 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:08.395 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:08.395 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:08.395 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:08.395 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:08.395 09:13:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.395 09:13:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:08.395 09:13:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.395 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:16:08.395 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:16:08.960 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:16:08.960 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:16:08.960 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:16:08.960 [2024-11-20 09:13:03.776998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:16:08.960 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:08.960 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:08.960 [2024-11-20 09:13:03.779307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.960 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:08.961 [2024-11-20 09:13:03.779368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.961 [2024-11-20 09:13:03.779393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.961 [2024-11-20 09:13:03.779418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.961 [2024-11-20 09:13:03.779436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.961 [2024-11-20 09:13:03.779449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.961 [2024-11-20 09:13:03.779468] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.961 [2024-11-20 09:13:03.779481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.961 [2024-11-20 09:13:03.779497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.961 [2024-11-20 09:13:03.779510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:08.961 [2024-11-20 09:13:03.779530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.961 [2024-11-20 09:13:03.779542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
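This third removal cycle follows the same shape as the two before it, and together the traced markers (@27-@71) outline the whole helper. A skeleton reconstructed from them; redirection targets and loop details not visible in the log are assumptions, and the earlier sketches supply bdev_bdfs and the re-attach step:

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # traced as: 3 6 true
        local dev bdfs                                        # @30

        sleep "$hotplug_wait"                                 # @36
        while ((hotplug_events--)); do                        # @38
            for dev in "${nvmes[@]}"; do
                echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40 (path assumed)
            done
            bdfs=($(bdev_bdfs))                               # @50
            while ((${#bdfs[@]} > 0)); do
                printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
                sleep 0.5
                bdfs=($(bdev_bdfs))
            done
            # @56-@62: rescan and rebind to uio_pci_generic (earlier sketch),
            # then wait out the traced 'sleep 12' before checking the result.
            sleep $((hotplug_wait * 2))                       # @66
            bdfs=($(bdev_bdfs))                               # @70
            [[ ${bdfs[*]} == "${nvmes[*]}" ]]                 # @71: both BDFs back
        done
    }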
00:16:08.961 09:13:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.961 09:13:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:08.961 09:13:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.961 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:16:08.961 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:16:08.961 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:08.961 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:08.961 09:13:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:16:08.961 09:13:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:16:08.961 09:13:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:08.961 09:13:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:16:08.961 09:13:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:16:08.961 09:13:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:16:09.219 09:13:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:16:09.219 09:13:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:16:09.219 09:13:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.10 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.10 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.10 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.10 2 00:16:21.493 remove_attach_helper took 45.10s to complete (handling 2 nvme drive(s)) 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:16:21.493 09:13:16 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68863 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68863 ']' 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68863 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68863 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.493 09:13:16 sw_hotplug -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68863' 00:16:21.493 killing process with pid 68863 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68863 00:16:21.493 09:13:16 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68863 00:16:23.400 09:13:18 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:23.659 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:24.227 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:24.227 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:24.227 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.227 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:24.227 00:16:24.227 real 2m32.876s 00:16:24.227 user 1m54.540s 00:16:24.227 sys 0m18.210s 00:16:24.227 09:13:19 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.227 09:13:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:16:24.227 ************************************ 00:16:24.227 END TEST sw_hotplug 00:16:24.227 ************************************ 00:16:24.227 09:13:19 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:16:24.227 09:13:19 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:24.227 09:13:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:24.227 09:13:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.227 09:13:19 -- common/autotest_common.sh@10 -- # set +x 00:16:24.489 ************************************ 00:16:24.489 START TEST nvme_xnvme 00:16:24.489 ************************************ 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:16:24.489 * Looking for test storage... 
00:16:24.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.489 09:13:19 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:24.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.489 --rc genhtml_branch_coverage=1 00:16:24.489 --rc genhtml_function_coverage=1 00:16:24.489 --rc genhtml_legend=1 00:16:24.489 --rc geninfo_all_blocks=1 00:16:24.489 --rc geninfo_unexecuted_blocks=1 00:16:24.489 00:16:24.489 ' 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:24.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.489 --rc genhtml_branch_coverage=1 00:16:24.489 --rc genhtml_function_coverage=1 00:16:24.489 --rc genhtml_legend=1 00:16:24.489 --rc geninfo_all_blocks=1 00:16:24.489 --rc geninfo_unexecuted_blocks=1 00:16:24.489 00:16:24.489 ' 00:16:24.489 09:13:19 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:24.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.489 --rc genhtml_branch_coverage=1 00:16:24.489 --rc genhtml_function_coverage=1 00:16:24.489 --rc genhtml_legend=1 00:16:24.489 --rc geninfo_all_blocks=1 00:16:24.489 --rc geninfo_unexecuted_blocks=1 00:16:24.489 00:16:24.489 ' 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:24.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.489 --rc genhtml_branch_coverage=1 00:16:24.489 --rc genhtml_function_coverage=1 00:16:24.489 --rc genhtml_legend=1 00:16:24.489 --rc geninfo_all_blocks=1 00:16:24.489 --rc geninfo_unexecuted_blocks=1 00:16:24.489 00:16:24.489 ' 00:16:24.489 09:13:19 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:16:24.489 09:13:19 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:24.489 09:13:19 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:24.489 09:13:19 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:24.490 09:13:19 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:24.490 09:13:19 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
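Earlier in this xnvme preamble the trace walked through scripts/common.sh comparing the installed lcov version against 2 (lt 1.15 2, the IFS=.-: field splits, read -ra, and the decimal helper). A compact sketch of that comparison, reconstructed from the traced steps rather than copied from the script, so the control flow is illustrative:

    # Split each version on . - : and compare numerically, field by field.
    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # non-numeric fields count as 0
    }

    lt() { cmp_versions "$1" '<' "$2"; }   # traced entry point: lt 1.15 2

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '=' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }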
00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:24.490 #define SPDK_CONFIG_H 00:16:24.490 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:24.490 #define SPDK_CONFIG_APPS 1 00:16:24.490 #define SPDK_CONFIG_ARCH native 00:16:24.490 #define SPDK_CONFIG_ASAN 1 00:16:24.490 #undef SPDK_CONFIG_AVAHI 00:16:24.490 #undef SPDK_CONFIG_CET 00:16:24.490 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:24.490 #define SPDK_CONFIG_COVERAGE 1 00:16:24.490 #define SPDK_CONFIG_CROSS_PREFIX 00:16:24.490 #undef SPDK_CONFIG_CRYPTO 00:16:24.490 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:24.490 #undef SPDK_CONFIG_CUSTOMOCF 00:16:24.490 #undef SPDK_CONFIG_DAOS 00:16:24.490 #define SPDK_CONFIG_DAOS_DIR 00:16:24.490 #define SPDK_CONFIG_DEBUG 1 00:16:24.490 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:24.490 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:16:24.490 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:24.490 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:24.490 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:24.490 #undef SPDK_CONFIG_DPDK_UADK 00:16:24.490 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:24.490 #define SPDK_CONFIG_EXAMPLES 1 00:16:24.490 #undef SPDK_CONFIG_FC 00:16:24.490 #define SPDK_CONFIG_FC_PATH 00:16:24.490 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:24.490 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:24.490 #define SPDK_CONFIG_FSDEV 1 00:16:24.490 #undef SPDK_CONFIG_FUSE 00:16:24.490 #undef SPDK_CONFIG_FUZZER 00:16:24.490 #define SPDK_CONFIG_FUZZER_LIB 00:16:24.490 #undef SPDK_CONFIG_GOLANG 00:16:24.490 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:24.490 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:24.490 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:24.490 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:24.490 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:24.490 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:24.490 #undef SPDK_CONFIG_HAVE_LZ4 00:16:24.490 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:24.490 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:24.490 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:24.490 #define SPDK_CONFIG_IDXD 1 00:16:24.490 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:24.490 #undef SPDK_CONFIG_IPSEC_MB 00:16:24.490 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:24.490 #define SPDK_CONFIG_ISAL 1 00:16:24.490 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:24.490 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:24.490 #define SPDK_CONFIG_LIBDIR 00:16:24.490 #undef SPDK_CONFIG_LTO 00:16:24.490 #define SPDK_CONFIG_MAX_LCORES 128 00:16:24.490 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:24.490 #define SPDK_CONFIG_NVME_CUSE 1 00:16:24.490 #undef SPDK_CONFIG_OCF 00:16:24.490 #define SPDK_CONFIG_OCF_PATH 00:16:24.490 #define SPDK_CONFIG_OPENSSL_PATH 00:16:24.490 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:24.490 
#define SPDK_CONFIG_PGO_DIR 00:16:24.490 #undef SPDK_CONFIG_PGO_USE 00:16:24.490 #define SPDK_CONFIG_PREFIX /usr/local 00:16:24.490 #undef SPDK_CONFIG_RAID5F 00:16:24.490 #undef SPDK_CONFIG_RBD 00:16:24.490 #define SPDK_CONFIG_RDMA 1 00:16:24.490 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:24.490 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:24.490 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:24.490 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:24.490 #define SPDK_CONFIG_SHARED 1 00:16:24.490 #undef SPDK_CONFIG_SMA 00:16:24.490 #define SPDK_CONFIG_TESTS 1 00:16:24.490 #undef SPDK_CONFIG_TSAN 00:16:24.490 #define SPDK_CONFIG_UBLK 1 00:16:24.490 #define SPDK_CONFIG_UBSAN 1 00:16:24.490 #undef SPDK_CONFIG_UNIT_TESTS 00:16:24.490 #undef SPDK_CONFIG_URING 00:16:24.490 #define SPDK_CONFIG_URING_PATH 00:16:24.490 #undef SPDK_CONFIG_URING_ZNS 00:16:24.490 #undef SPDK_CONFIG_USDT 00:16:24.490 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:24.490 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:24.490 #undef SPDK_CONFIG_VFIO_USER 00:16:24.490 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:24.490 #define SPDK_CONFIG_VHOST 1 00:16:24.490 #define SPDK_CONFIG_VIRTIO 1 00:16:24.490 #undef SPDK_CONFIG_VTUNE 00:16:24.490 #define SPDK_CONFIG_VTUNE_DIR 00:16:24.490 #define SPDK_CONFIG_WERROR 1 00:16:24.490 #define SPDK_CONFIG_WPDK_DIR 00:16:24.490 #define SPDK_CONFIG_XNVME 1 00:16:24.490 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:24.490 09:13:19 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:24.490 09:13:19 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.490 09:13:19 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.490 09:13:19 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.490 09:13:19 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.490 09:13:19 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.490 09:13:19 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.490 09:13:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.490 09:13:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.490 09:13:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:24.491 09:13:19 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.491 09:13:19 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:24.491 09:13:19 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:24.491 09:13:19 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:24.491 09:13:19 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:24.491 09:13:19 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:24.491 09:13:19 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:24.491 09:13:19 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:16:24.491 09:13:19 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:24.491 09:13:19 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@68 -- # uname -s 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:16:24.752 09:13:19 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:16:24.752 09:13:19 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:24.752 09:13:19 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:24.753 
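The long run of ': 0' / 'export SPDK_TEST_*' pairs above is the xtrace rendering of a defaulting idiom: ${VAR:=default} assigns only when VAR is unset or empty, and the ':' builtin discards the expansion, so the trace prints whatever value ended up in the variable. A minimal sketch, assuming autotest_common.sh uses exactly this form (the expanded trace is consistent with it):

    # Default every test flag to 0 unless the job config already set it.
    : "${SPDK_TEST_NVME:=0}";  export SPDK_TEST_NVME
    : "${SPDK_TEST_XNVME:=0}"; export SPDK_TEST_XNVME
    # Flags injected by autorun-spdk.conf (e.g. SPDK_TEST_NVME=1) survive;
    # everything else falls back to 0:
    echo "NVME=$SPDK_TEST_NVME XNVME=$SPDK_TEST_XNVME"
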
09:13:19 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:24.753 
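The sanitizer wiring traced above is plain environment plumbing: ASan/UBSan runtime knobs travel as colon-separated key=value strings, and LeakSanitizer reads suppressions from a file named in LSAN_OPTIONS (here seeded with the known libfuse3 leak). A self-contained sketch with the values from the trace; the target binary name is a placeholder:

    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
    export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo "leak:libfuse3.so" >> "$supp"       # ignore a known leak inside libfuse3
    export LSAN_OPTIONS="suppressions=$supp"
    ./some_asan_built_test                   # placeholder for any instrumented binary
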
09:13:19 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
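Just below, the harness probes its own autotest PID with 'kill -0 70218'. Signal 0 delivers nothing; the kernel only performs the existence/permission check, so the exit status doubles as a liveness test:

    pid=$$                        # stand-in; the harness checks the autotest PID
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive"
    else
        echo "process $pid is gone"
    fi
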
00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70218 ]] 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70218 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:16:24.753 09:13:19 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.zq90Xp 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.zq90Xp/tests/xnvme /tmp/spdk.zq90Xp 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13952671744 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5615509504 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:24.754 
09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13952671744 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5615509504 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:16:24.754 09:13:19 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96675389440 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3027390464 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:24.754 * Looking for test storage... 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13952671744 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:24.754 09:13:19 nvme_xnvme -- 
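The set_test_storage walk above parses 'df -T' line by line into associative arrays keyed by mount point, then resolves the candidate directory's mount with awk and checks its free space against requested_size. A simplified sketch of that flow (the real helper also builds mktemp fallbacks and special-cases tmpfs/ramfs; sizes here are df's default 1K blocks rather than bytes):

    declare -A fss avails
    while read -r source fs size used avail _ mount; do
        fss["$mount"]=$fs
        avails["$mount"]=$avail
    done < <(df -T | grep -v Filesystem)

    requested_kb=$((2 * 1024 * 1024))   # ~2 GiB, analogous to requested_size above
    mount=$(df /tmp | awk '$1 !~ /Filesystem/ {print $6}')
    if (( avails[$mount] >= requested_kb )); then
        echo "enough space on ${fss[$mount]} at $mount"
    fi
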
common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:24.754 09:13:19 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.754 09:13:19 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:16:24.755 09:13:19 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.755 09:13:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:24.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.755 --rc genhtml_branch_coverage=1 00:16:24.755 --rc genhtml_function_coverage=1 00:16:24.755 --rc genhtml_legend=1 00:16:24.755 --rc geninfo_all_blocks=1 00:16:24.755 --rc geninfo_unexecuted_blocks=1 00:16:24.755 00:16:24.755 ' 00:16:24.755 09:13:19 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:24.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.755 --rc genhtml_branch_coverage=1 00:16:24.755 --rc genhtml_function_coverage=1 00:16:24.755 --rc genhtml_legend=1 00:16:24.755 --rc geninfo_all_blocks=1 00:16:24.755 --rc geninfo_unexecuted_blocks=1 00:16:24.755 00:16:24.755 ' 00:16:24.755 09:13:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:24.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.755 --rc genhtml_branch_coverage=1 00:16:24.755 --rc genhtml_function_coverage=1 00:16:24.755 --rc genhtml_legend=1 00:16:24.755 --rc geninfo_all_blocks=1 00:16:24.755 --rc geninfo_unexecuted_blocks=1 00:16:24.755 00:16:24.755 ' 00:16:24.755 09:13:19 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:24.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.755 --rc genhtml_branch_coverage=1 00:16:24.755 --rc genhtml_function_coverage=1 00:16:24.755 --rc genhtml_legend=1 00:16:24.755 --rc geninfo_all_blocks=1 00:16:24.755 --rc geninfo_unexecuted_blocks=1 00:16:24.755 00:16:24.755 ' 00:16:24.755 09:13:19 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.755 09:13:19 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.755 09:13:19 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.755 09:13:19 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.755 09:13:19 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.755 09:13:19 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:16:24.755 09:13:19 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:16:24.755 
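Earlier in this stretch, scripts/common.sh decides which lcov options to use by testing 'lt 1.15 2', i.e. cmp_versions with IFS=.-: splitting each version into components compared numerically, missing components treated as 0. A condensed sketch of that comparison (the real cmp_versions also validates each component as decimal, omitted here):

    lt() {   # return 0 iff version $1 < version $2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                          # equal is not less-than
    }
    lt 1.15 2 && echo "1.15 < 2: picks the pre-2.0 lcov_rc_opt branch"
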
09:13:19 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:16:24.755 09:13:19 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:25.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:25.324 Waiting for block devices as requested 00:16:25.324 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:25.582 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:25.582 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:25.582 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:30.852 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:30.852 09:13:25 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:16:31.111 09:13:26 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:16:31.111 09:13:26 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:16:31.370 09:13:26 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:31.370 09:13:26 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:31.370 No valid GPT data, bailing 00:16:31.370 09:13:26 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:31.370 09:13:26 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:16:31.370 09:13:26 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:31.370 09:13:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:31.370 09:13:26 
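The xnvme test matrix assembled above comes down to two small lookups: each io_mechanism maps to the device node it drives (the /dev/ng0n1 char device for io_uring_cmd passthrough, the /dev/nvme0n1 block device otherwise), crossed with the conserve_cpu variants. A sketch of the same matrix:

    xnvme_io=(libaio io_uring io_uring_cmd)
    declare -A xnvme_filename=(
        [libaio]=/dev/nvme0n1
        [io_uring]=/dev/nvme0n1
        [io_uring_cmd]=/dev/ng0n1   # uring command passthrough uses the ng node
    )
    xnvme_conserve_cpu=(false true)
    for io in "${xnvme_io[@]}"; do
        for cc in "${xnvme_conserve_cpu[@]}"; do
            echo "bdev_xnvme_create ${xnvme_filename[$io]} xnvme_bdev $io conserve_cpu=$cc"
        done
    done
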
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.370 09:13:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.370 09:13:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.370 ************************************ 00:16:31.370 START TEST xnvme_rpc 00:16:31.370 ************************************ 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70611 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70611 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70611 ']' 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.370 09:13:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.629 [2024-11-20 09:13:26.506510] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
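The rpc_xnvme checks that follow verify each parameter of the created bdev by asking the running target for its config and filtering with jq. The jq program below is the one from the trace; driving it through scripts/rpc.py (rather than the suite's rpc_cmd wrapper) is an assumption made so the example stands alone, using the default /var/tmp/spdk.sock socket the target just started listening on:

    rpc=./scripts/rpc.py
    for field in name filename io_mechanism conserve_cpu; do
        "$rpc" framework_get_config bdev \
            | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$field"
    done
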
00:16:31.629 [2024-11-20 09:13:26.506737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70611 ] 00:16:31.629 [2024-11-20 09:13:26.700099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.888 [2024-11-20 09:13:26.870805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.825 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.825 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:32.825 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.826 xnvme_bdev 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70611 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70611 ']' 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70611 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.826 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70611 00:16:33.085 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.085 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.085 killing process with pid 70611 00:16:33.085 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70611' 00:16:33.085 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70611 00:16:33.085 09:13:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70611 00:16:34.991 00:16:34.991 real 0m3.638s 00:16:34.991 user 0m3.700s 00:16:34.991 sys 0m0.651s 00:16:34.991 09:13:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.991 09:13:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.991 ************************************ 00:16:34.991 END TEST xnvme_rpc 00:16:34.991 ************************************ 00:16:34.991 09:13:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:34.991 09:13:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:34.991 09:13:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.991 09:13:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.991 ************************************ 00:16:34.991 START TEST xnvme_bdevperf 00:16:34.991 ************************************ 00:16:34.991 09:13:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:34.991 09:13:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:34.991 09:13:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:34.991 09:13:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:34.991 09:13:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:34.991 09:13:30 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:34.991 09:13:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:34.991 09:13:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:34.991 { 00:16:34.991 "subsystems": [ 00:16:34.991 { 00:16:34.991 "subsystem": "bdev", 00:16:34.991 "config": [ 00:16:34.991 { 00:16:34.991 "params": { 00:16:34.991 "io_mechanism": "libaio", 00:16:34.991 "conserve_cpu": false, 00:16:34.991 "filename": "/dev/nvme0n1", 00:16:34.991 "name": "xnvme_bdev" 00:16:34.991 }, 00:16:34.991 "method": "bdev_xnvme_create" 00:16:34.991 }, 00:16:34.991 { 00:16:34.991 "method": "bdev_wait_for_examine" 00:16:34.991 } 00:16:34.991 ] 00:16:34.991 } 00:16:34.991 ] 00:16:34.991 } 00:16:35.250 [2024-11-20 09:13:30.171360] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:16:35.250 [2024-11-20 09:13:30.171589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70691 ] 00:16:35.250 [2024-11-20 09:13:30.350775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.509 [2024-11-20 09:13:30.466368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.768 Running I/O for 5 seconds... 00:16:38.080 37440.00 IOPS, 146.25 MiB/s [2024-11-20T09:13:34.138Z] 37220.50 IOPS, 145.39 MiB/s [2024-11-20T09:13:35.074Z] 37107.00 IOPS, 144.95 MiB/s [2024-11-20T09:13:36.010Z] 37320.50 IOPS, 145.78 MiB/s 00:16:40.890 Latency(us) 00:16:40.890 [2024-11-20T09:13:36.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.890 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:40.890 xnvme_bdev : 5.00 36173.00 141.30 0.00 0.00 1765.72 346.30 5272.67 00:16:40.890 [2024-11-20T09:13:36.010Z] =================================================================================================================== 00:16:40.890 [2024-11-20T09:13:36.010Z] Total : 36173.00 141.30 0.00 0.00 1765.72 346.30 5272.67 00:16:41.826 09:13:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:41.826 09:13:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:41.826 09:13:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:41.826 09:13:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:41.826 09:13:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:41.826 { 00:16:41.826 "subsystems": [ 00:16:41.826 { 00:16:41.826 "subsystem": "bdev", 00:16:41.826 "config": [ 00:16:41.826 { 00:16:41.826 "params": { 00:16:41.826 "io_mechanism": "libaio", 00:16:41.826 "conserve_cpu": false, 00:16:41.826 "filename": "/dev/nvme0n1", 00:16:41.826 "name": "xnvme_bdev" 00:16:41.826 }, 00:16:41.826 "method": "bdev_xnvme_create" 00:16:41.826 }, 00:16:41.826 { 00:16:41.826 "method": "bdev_wait_for_examine" 00:16:41.826 } 00:16:41.826 ] 00:16:41.826 } 00:16:41.826 ] 00:16:41.826 } 00:16:41.826 [2024-11-20 09:13:36.932423] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
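The bdevperf runs above take no config file: gen_conf prints a JSON subsystems document and process substitution hands it over as a file-descriptor path, which is why the command line reads --json /dev/fd/62. A standalone sketch using the same JSON and flags as the traced randread run:

    gen_conf() {
        printf '%s\n' '{
          "subsystems": [{
            "subsystem": "bdev",
            "config": [
              { "method": "bdev_xnvme_create",
                "params": { "io_mechanism": "libaio", "conserve_cpu": false,
                            "filename": "/dev/nvme0n1", "name": "xnvme_bdev" } },
              { "method": "bdev_wait_for_examine" }
            ]
          }]
        }'
    }
    ./build/examples/bdevperf --json <(gen_conf) \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
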
00:16:41.826 [2024-11-20 09:13:36.932626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70766 ] 00:16:42.085 [2024-11-20 09:13:37.113438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.344 [2024-11-20 09:13:37.233549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.603 Running I/O for 5 seconds... 00:16:44.475 29493.00 IOPS, 115.21 MiB/s [2024-11-20T09:13:41.005Z] 32453.00 IOPS, 126.77 MiB/s [2024-11-20T09:13:41.940Z] 32400.67 IOPS, 126.57 MiB/s [2024-11-20T09:13:42.875Z] 33392.25 IOPS, 130.44 MiB/s 00:16:47.755 Latency(us) 00:16:47.755 [2024-11-20T09:13:42.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.755 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:47.755 xnvme_bdev : 5.00 34178.86 133.51 0.00 0.00 1868.38 215.97 5421.61 00:16:47.755 [2024-11-20T09:13:42.875Z] =================================================================================================================== 00:16:47.755 [2024-11-20T09:13:42.875Z] Total : 34178.86 133.51 0.00 0.00 1868.38 215.97 5421.61 00:16:48.692 00:16:48.692 real 0m13.475s 00:16:48.692 user 0m4.661s 00:16:48.692 sys 0m6.834s 00:16:48.692 09:13:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.692 09:13:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:48.692 ************************************ 00:16:48.692 END TEST xnvme_bdevperf 00:16:48.692 ************************************ 00:16:48.692 09:13:43 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:48.692 09:13:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:48.692 09:13:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.692 09:13:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:48.692 ************************************ 00:16:48.692 START TEST xnvme_fio_plugin 00:16:48.692 ************************************ 00:16:48.692 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:48.693 09:13:43 
nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:48.693 09:13:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:48.693 { 00:16:48.693 "subsystems": [ 00:16:48.693 { 00:16:48.693 "subsystem": "bdev", 00:16:48.693 "config": [ 00:16:48.693 { 00:16:48.693 "params": { 00:16:48.693 "io_mechanism": "libaio", 00:16:48.693 "conserve_cpu": false, 00:16:48.693 "filename": "/dev/nvme0n1", 00:16:48.693 "name": "xnvme_bdev" 00:16:48.693 }, 00:16:48.693 "method": "bdev_xnvme_create" 00:16:48.693 }, 00:16:48.693 { 00:16:48.693 "method": "bdev_wait_for_examine" 00:16:48.693 } 00:16:48.693 ] 00:16:48.693 } 00:16:48.693 ] 00:16:48.693 } 00:16:48.693 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:48.693 fio-3.35 00:16:48.693 Starting 1 thread 00:16:55.259 00:16:55.259 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70891: Wed Nov 20 09:13:49 2024 00:16:55.259 read: IOPS=30.7k, BW=120MiB/s (126MB/s)(600MiB/5002msec) 00:16:55.259 slat (usec): min=4, max=1286, avg=28.67, stdev=29.20 00:16:55.259 clat (usec): min=112, max=6682, avg=1163.79, stdev=662.56 00:16:55.259 lat (usec): min=171, max=6691, avg=1192.46, stdev=666.28 00:16:55.259 clat percentiles (usec): 00:16:55.259 | 1.00th=[ 233], 5.00th=[ 326], 10.00th=[ 416], 20.00th=[ 586], 00:16:55.259 | 30.00th=[ 742], 40.00th=[ 906], 50.00th=[ 1074], 60.00th=[ 1237], 00:16:55.259 | 70.00th=[ 1418], 80.00th=[ 1631], 90.00th=[ 2008], 95.00th=[ 2376], 00:16:55.259 | 99.00th=[ 3294], 99.50th=[ 3752], 99.90th=[ 4621], 99.95th=[ 4883], 00:16:55.259 | 99.99th=[ 5604] 00:16:55.259 bw ( KiB/s): min=91224, max=136504, per=99.42%, avg=122218.56, stdev=13745.73, 
samples=9 00:16:55.259 iops : min=22806, max=34126, avg=30554.56, stdev=3436.51, samples=9 00:16:55.259 lat (usec) : 250=1.51%, 500=13.51%, 750=15.31%, 1000=15.40% 00:16:55.259 lat (msec) : 2=44.29%, 4=9.66%, 10=0.33% 00:16:55.259 cpu : usr=24.84%, sys=54.91%, ctx=85, majf=0, minf=764 00:16:55.259 IO depths : 1=0.1%, 2=1.0%, 4=4.4%, 8=11.9%, 16=26.6%, 32=54.3%, >=64=1.7% 00:16:55.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.259 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:16:55.259 issued rwts: total=153718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.259 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:55.259 00:16:55.259 Run status group 0 (all jobs): 00:16:55.259 READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=600MiB (630MB), run=5002-5002msec 00:16:55.825 ----------------------------------------------------- 00:16:55.825 Suppressions used: 00:16:55.825 count bytes template 00:16:55.825 1 11 /usr/src/fio/parse.c 00:16:55.825 1 8 libtcmalloc_minimal.so 00:16:55.825 1 904 libcrypto.so 00:16:55.825 ----------------------------------------------------- 00:16:55.825 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:55.825 09:13:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:56.084 { 00:16:56.084 "subsystems": [ 00:16:56.084 { 00:16:56.084 "subsystem": "bdev", 00:16:56.084 "config": [ 00:16:56.084 { 00:16:56.084 "params": { 00:16:56.084 "io_mechanism": "libaio", 00:16:56.084 "conserve_cpu": false, 00:16:56.084 "filename": "/dev/nvme0n1", 00:16:56.084 "name": "xnvme_bdev" 00:16:56.084 }, 00:16:56.084 "method": "bdev_xnvme_create" 00:16:56.084 }, 00:16:56.084 { 00:16:56.084 "method": "bdev_wait_for_examine" 00:16:56.084 } 00:16:56.084 ] 00:16:56.084 } 00:16:56.084 ] 00:16:56.084 } 00:16:56.084 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:56.084 fio-3.35 00:16:56.084 Starting 1 thread 00:17:02.649 00:17:02.649 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70984: Wed Nov 20 09:13:56 2024 00:17:02.649 write: IOPS=25.1k, BW=98.0MiB/s (103MB/s)(490MiB/5001msec); 0 zone resets 00:17:02.649 slat (usec): min=4, max=791, avg=35.39, stdev=32.10 00:17:02.649 clat (usec): min=114, max=6170, avg=1402.20, stdev=806.71 00:17:02.649 lat (usec): min=135, max=6228, avg=1437.59, stdev=811.34 00:17:02.649 clat percentiles (usec): 00:17:02.649 | 1.00th=[ 258], 5.00th=[ 367], 10.00th=[ 474], 20.00th=[ 676], 00:17:02.649 | 30.00th=[ 873], 40.00th=[ 1074], 50.00th=[ 1270], 60.00th=[ 1483], 00:17:02.649 | 70.00th=[ 1729], 80.00th=[ 2040], 90.00th=[ 2507], 95.00th=[ 2933], 00:17:02.649 | 99.00th=[ 3752], 99.50th=[ 4146], 99.90th=[ 4948], 99.95th=[ 5276], 00:17:02.649 | 99.99th=[ 5604] 00:17:02.649 bw ( KiB/s): min=84848, max=119824, per=97.71%, avg=98076.67, stdev=10968.27, samples=9 00:17:02.649 iops : min=21212, max=29956, avg=24519.11, stdev=2742.13, samples=9 00:17:02.649 lat (usec) : 250=0.86%, 500=10.45%, 750=12.40%, 1000=12.62% 00:17:02.649 lat (msec) : 2=42.60%, 4=20.45%, 10=0.63% 00:17:02.649 cpu : usr=25.80%, sys=53.86%, ctx=160, majf=0, minf=764 00:17:02.649 IO depths : 1=0.1%, 2=1.3%, 4=4.9%, 8=11.9%, 16=26.3%, 32=53.8%, >=64=1.7% 00:17:02.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.649 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:02.649 issued rwts: total=0,125494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.649 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.649 00:17:02.649 Run status group 0 (all jobs): 00:17:02.649 WRITE: bw=98.0MiB/s (103MB/s), 98.0MiB/s-98.0MiB/s (103MB/s-103MB/s), io=490MiB (514MB), run=5001-5001msec 00:17:03.215 ----------------------------------------------------- 00:17:03.215 Suppressions used: 00:17:03.215 count bytes template 00:17:03.215 1 11 /usr/src/fio/parse.c 00:17:03.215 1 8 libtcmalloc_minimal.so 00:17:03.215 1 904 libcrypto.so 00:17:03.215 ----------------------------------------------------- 00:17:03.215 00:17:03.215 00:17:03.215 real 0m14.594s 00:17:03.215 user 0m6.018s 00:17:03.215 sys 0m6.209s 00:17:03.215 09:13:58 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.215 09:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:03.215 ************************************ 00:17:03.215 END TEST xnvme_fio_plugin 00:17:03.216 ************************************ 00:17:03.216 09:13:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:03.216 09:13:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:03.216 09:13:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:03.216 09:13:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:03.216 09:13:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.216 09:13:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.216 09:13:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:03.216 ************************************ 00:17:03.216 START TEST xnvme_rpc 00:17:03.216 ************************************ 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71069 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71069 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71069 ']' 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.216 09:13:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:03.475 [2024-11-20 09:13:58.366630] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
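Each xnvme_fio_plugin pass traced above drives stock fio through SPDK's bdev ioengine: the bdev_xnvme_create JSON echoed in the trace is handed to fio over a process-substitution file descriptor (the /dev/fd/62 above), and LD_PRELOAD keeps ASAN loaded ahead of the ioengine plugin. A minimal hand-run sketch, with the plugin and fio paths copied from the log and gen_conf standing in for the suite's JSON-emitting helper:

  # Sketch only: <(gen_conf) plays the role of /dev/fd/62 in the trace.
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev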
00:17:03.475 [2024-11-20 09:13:58.366841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71069 ] 00:17:03.475 [2024-11-20 09:13:58.550647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.734 [2024-11-20 09:13:58.676513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.676 xnvme_bdev 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:04.676 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.935 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71069 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71069 ']' 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71069 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71069 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.936 killing process with pid 71069 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71069' 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71069 00:17:04.936 09:13:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71069 00:17:07.471 00:17:07.471 real 0m3.836s 00:17:07.471 user 0m3.889s 00:17:07.471 sys 0m0.673s 00:17:07.471 09:14:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.471 09:14:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 ************************************ 00:17:07.471 END TEST xnvme_rpc 00:17:07.471 ************************************ 00:17:07.471 09:14:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:07.471 09:14:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:07.471 09:14:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.471 09:14:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 ************************************ 00:17:07.471 START TEST xnvme_bdevperf 00:17:07.471 ************************************ 00:17:07.471 09:14:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:07.471 09:14:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:07.471 09:14:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:17:07.471 09:14:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:07.471 09:14:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:07.471 09:14:02 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:07.471 09:14:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:07.471 09:14:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:07.471 { 00:17:07.471 "subsystems": [ 00:17:07.471 { 00:17:07.471 "subsystem": "bdev", 00:17:07.471 "config": [ 00:17:07.471 { 00:17:07.472 "params": { 00:17:07.472 "io_mechanism": "libaio", 00:17:07.472 "conserve_cpu": true, 00:17:07.472 "filename": "/dev/nvme0n1", 00:17:07.472 "name": "xnvme_bdev" 00:17:07.472 }, 00:17:07.472 "method": "bdev_xnvme_create" 00:17:07.472 }, 00:17:07.472 { 00:17:07.472 "method": "bdev_wait_for_examine" 00:17:07.472 } 00:17:07.472 ] 00:17:07.472 } 00:17:07.472 ] 00:17:07.472 } 00:17:07.472 [2024-11-20 09:14:02.245930] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:17:07.472 [2024-11-20 09:14:02.246123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71150 ] 00:17:07.472 [2024-11-20 09:14:02.431317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.472 [2024-11-20 09:14:02.561141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.041 Running I/O for 5 seconds... 00:17:09.916 35765.00 IOPS, 139.71 MiB/s [2024-11-20T09:14:05.972Z] 35463.50 IOPS, 138.53 MiB/s [2024-11-20T09:14:07.348Z] 35223.00 IOPS, 137.59 MiB/s [2024-11-20T09:14:08.306Z] 35049.00 IOPS, 136.91 MiB/s [2024-11-20T09:14:08.306Z] 35136.40 IOPS, 137.25 MiB/s 00:17:13.186 Latency(us) 00:17:13.186 [2024-11-20T09:14:08.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.186 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:13.186 xnvme_bdev : 5.01 35108.19 137.14 0.00 0.00 1819.11 502.69 44564.48 00:17:13.186 [2024-11-20T09:14:08.306Z] =================================================================================================================== 00:17:13.186 [2024-11-20T09:14:08.306Z] Total : 35108.19 137.14 0.00 0.00 1819.11 502.69 44564.48 00:17:14.136 09:14:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:14.136 09:14:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:14.136 09:14:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:14.136 09:14:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:14.136 09:14:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:14.136 { 00:17:14.136 "subsystems": [ 00:17:14.136 { 00:17:14.136 "subsystem": "bdev", 00:17:14.136 "config": [ 00:17:14.136 { 00:17:14.136 "params": { 00:17:14.136 "io_mechanism": "libaio", 00:17:14.136 "conserve_cpu": true, 00:17:14.136 "filename": "/dev/nvme0n1", 00:17:14.137 "name": "xnvme_bdev" 00:17:14.137 }, 00:17:14.137 "method": "bdev_xnvme_create" 00:17:14.137 }, 00:17:14.137 { 00:17:14.137 "method": "bdev_wait_for_examine" 00:17:14.137 } 00:17:14.137 ] 00:17:14.137 } 00:17:14.137 ] 00:17:14.137 } 00:17:14.137 [2024-11-20 09:14:09.120931] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
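The bdevperf passes follow the same config-over-fd pattern as the fio runs, just with SPDK's own benchmark binary; this round carries "conserve_cpu": true in the JSON echoed above. A hand-run sketch mirroring the command in the trace, again with <(gen_conf) in place of /dev/fd/62:

  # 4 KiB random reads, queue depth 64, 5 s, against the xnvme_bdev target.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(gen_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096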
00:17:14.137 [2024-11-20 09:14:09.121102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71231 ] 00:17:14.394 [2024-11-20 09:14:09.306501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.394 [2024-11-20 09:14:09.434526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.961 Running I/O for 5 seconds... 00:17:16.831 39490.00 IOPS, 154.26 MiB/s [2024-11-20T09:14:12.887Z] 38324.00 IOPS, 149.70 MiB/s [2024-11-20T09:14:13.823Z] 38068.67 IOPS, 148.71 MiB/s [2024-11-20T09:14:15.201Z] 37998.25 IOPS, 148.43 MiB/s 00:17:20.081 Latency(us) 00:17:20.081 [2024-11-20T09:14:15.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.081 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:20.081 xnvme_bdev : 5.00 37437.96 146.24 0.00 0.00 1705.77 229.00 4081.11 00:17:20.081 [2024-11-20T09:14:15.201Z] =================================================================================================================== 00:17:20.081 [2024-11-20T09:14:15.201Z] Total : 37437.96 146.24 0.00 0.00 1705.77 229.00 4081.11 00:17:21.017 00:17:21.017 real 0m13.759s 00:17:21.017 user 0m4.911s 00:17:21.017 sys 0m6.849s 00:17:21.017 09:14:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.017 09:14:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:21.017 ************************************ 00:17:21.017 END TEST xnvme_bdevperf 00:17:21.017 ************************************ 00:17:21.017 09:14:15 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:21.017 09:14:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:21.017 09:14:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.017 09:14:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:21.017 ************************************ 00:17:21.017 START TEST xnvme_fio_plugin 00:17:21.018 ************************************ 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:21.018 09:14:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:21.018 { 00:17:21.018 "subsystems": [ 00:17:21.018 { 00:17:21.018 "subsystem": "bdev", 00:17:21.018 "config": [ 00:17:21.018 { 00:17:21.018 "params": { 00:17:21.018 "io_mechanism": "libaio", 00:17:21.018 "conserve_cpu": true, 00:17:21.018 "filename": "/dev/nvme0n1", 00:17:21.018 "name": "xnvme_bdev" 00:17:21.018 }, 00:17:21.018 "method": "bdev_xnvme_create" 00:17:21.018 }, 00:17:21.018 { 00:17:21.018 "method": "bdev_wait_for_examine" 00:17:21.018 } 00:17:21.018 ] 00:17:21.018 } 00:17:21.018 ] 00:17:21.018 } 00:17:21.277 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:21.277 fio-3.35 00:17:21.277 Starting 1 thread 00:17:27.855 00:17:27.855 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71353: Wed Nov 20 09:14:22 2024 00:17:27.855 read: IOPS=22.2k, BW=86.9MiB/s (91.1MB/s)(435MiB/5001msec) 00:17:27.855 slat (usec): min=4, max=1306, avg=40.39, stdev=32.61 00:17:27.855 clat (usec): min=63, max=6551, avg=1574.11, stdev=861.77 00:17:27.855 lat (usec): min=142, max=6598, avg=1614.50, stdev=864.41 00:17:27.855 clat percentiles (usec): 00:17:27.855 | 1.00th=[ 265], 5.00th=[ 388], 10.00th=[ 510], 20.00th=[ 750], 00:17:27.855 | 30.00th=[ 996], 40.00th=[ 1221], 50.00th=[ 1467], 60.00th=[ 1745], 00:17:27.855 | 70.00th=[ 2024], 80.00th=[ 2343], 90.00th=[ 2737], 95.00th=[ 3064], 00:17:27.855 | 99.00th=[ 3818], 99.50th=[ 4228], 99.90th=[ 5080], 99.95th=[ 5342], 00:17:27.855 | 99.99th=[ 5932] 00:17:27.855 bw ( KiB/s): min=80680, max=109568, per=100.00%, avg=90197.89, 
stdev=9921.10, samples=9 00:17:27.855 iops : min=20170, max=27388, avg=22549.22, stdev=2479.30, samples=9 00:17:27.855 lat (usec) : 100=0.01%, 250=0.71%, 500=8.84%, 750=10.42%, 1000=10.36% 00:17:27.855 lat (msec) : 2=38.81%, 4=30.12%, 10=0.74% 00:17:27.855 cpu : usr=23.10%, sys=54.58%, ctx=136, majf=0, minf=740 00:17:27.855 IO depths : 1=0.1%, 2=1.5%, 4=5.4%, 8=12.4%, 16=26.0%, 32=52.9%, >=64=1.7% 00:17:27.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:27.855 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:17:27.855 issued rwts: total=111273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:27.855 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:27.855 00:17:27.855 Run status group 0 (all jobs): 00:17:27.855 READ: bw=86.9MiB/s (91.1MB/s), 86.9MiB/s-86.9MiB/s (91.1MB/s-91.1MB/s), io=435MiB (456MB), run=5001-5001msec 00:17:28.426 ----------------------------------------------------- 00:17:28.426 Suppressions used: 00:17:28.426 count bytes template 00:17:28.426 1 11 /usr/src/fio/parse.c 00:17:28.426 1 8 libtcmalloc_minimal.so 00:17:28.426 1 904 libcrypto.so 00:17:28.426 ----------------------------------------------------- 00:17:28.426 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:28.426 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:28.426 09:14:23 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:28.427 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:28.427 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:28.427 09:14:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:28.427 { 00:17:28.427 "subsystems": [ 00:17:28.427 { 00:17:28.427 "subsystem": "bdev", 00:17:28.427 "config": [ 00:17:28.427 { 00:17:28.427 "params": { 00:17:28.427 "io_mechanism": "libaio", 00:17:28.427 "conserve_cpu": true, 00:17:28.427 "filename": "/dev/nvme0n1", 00:17:28.427 "name": "xnvme_bdev" 00:17:28.427 }, 00:17:28.427 "method": "bdev_xnvme_create" 00:17:28.427 }, 00:17:28.427 { 00:17:28.427 "method": "bdev_wait_for_examine" 00:17:28.427 } 00:17:28.427 ] 00:17:28.427 } 00:17:28.427 ] 00:17:28.427 } 00:17:28.685 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:28.685 fio-3.35 00:17:28.685 Starting 1 thread 00:17:35.245 00:17:35.245 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71449: Wed Nov 20 09:14:29 2024 00:17:35.245 write: IOPS=27.7k, BW=108MiB/s (114MB/s)(542MiB/5001msec); 0 zone resets 00:17:35.245 slat (usec): min=4, max=956, avg=32.05, stdev=35.87 00:17:35.245 clat (usec): min=30, max=5625, avg=1301.19, stdev=714.45 00:17:35.245 lat (usec): min=175, max=5640, avg=1333.24, stdev=717.32 00:17:35.245 clat percentiles (usec): 00:17:35.245 | 1.00th=[ 253], 5.00th=[ 363], 10.00th=[ 461], 20.00th=[ 652], 00:17:35.245 | 30.00th=[ 840], 40.00th=[ 1012], 50.00th=[ 1188], 60.00th=[ 1385], 00:17:35.245 | 70.00th=[ 1598], 80.00th=[ 1893], 90.00th=[ 2311], 95.00th=[ 2638], 00:17:35.245 | 99.00th=[ 3228], 99.50th=[ 3621], 99.90th=[ 4424], 99.95th=[ 4686], 00:17:35.245 | 99.99th=[ 5211] 00:17:35.245 bw ( KiB/s): min=90712, max=161136, per=96.02%, avg=106532.22, stdev=22973.44, samples=9 00:17:35.245 iops : min=22678, max=40284, avg=26633.00, stdev=5743.32, samples=9 00:17:35.245 lat (usec) : 50=0.01%, 100=0.01%, 250=0.94%, 500=11.08%, 750=13.24% 00:17:35.245 lat (usec) : 1000=14.18% 00:17:35.245 lat (msec) : 2=43.52%, 4=16.78%, 10=0.25% 00:17:35.245 cpu : usr=24.02%, sys=56.46%, ctx=78, majf=0, minf=764 00:17:35.245 IO depths : 1=0.1%, 2=1.1%, 4=4.7%, 8=11.8%, 16=26.1%, 32=54.5%, >=64=1.7% 00:17:35.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.245 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:17:35.245 issued rwts: total=0,138709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.245 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:35.245 00:17:35.245 Run status group 0 (all jobs): 00:17:35.245 WRITE: bw=108MiB/s (114MB/s), 108MiB/s-108MiB/s (114MB/s-114MB/s), io=542MiB (568MB), run=5001-5001msec 00:17:35.811 ----------------------------------------------------- 00:17:35.811 Suppressions used: 00:17:35.811 count bytes template 00:17:35.811 1 11 /usr/src/fio/parse.c 00:17:35.811 1 8 libtcmalloc_minimal.so 00:17:35.811 1 904 libcrypto.so 00:17:35.811 ----------------------------------------------------- 00:17:35.811 00:17:35.811 00:17:35.811 real 0m14.828s 00:17:35.811 user 0m5.979s 
00:17:35.811 sys 0m6.413s 00:17:35.811 09:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.811 09:14:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:35.811 ************************************ 00:17:35.811 END TEST xnvme_fio_plugin 00:17:35.811 ************************************ 00:17:35.811 09:14:30 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:35.811 09:14:30 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:17:35.811 09:14:30 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:17:35.811 09:14:30 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:17:35.811 09:14:30 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:35.811 09:14:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:35.811 09:14:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:35.811 09:14:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:35.811 09:14:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:35.811 09:14:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:35.811 09:14:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.811 09:14:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:35.811 ************************************ 00:17:35.811 START TEST xnvme_rpc 00:17:35.811 ************************************ 00:17:35.811 09:14:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71535 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71535 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71535 ']' 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.812 09:14:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.070 [2024-11-20 09:14:30.954295] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
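The xnvme_rpc test starting here repeats, for io_uring, the create/inspect/delete round-trip already run for libaio above. In SPDK's test helpers rpc_cmd is a thin wrapper over scripts/rpc.py against the default /var/tmp/spdk.sock, so a manual equivalent is roughly the sketch below (repo path as in the log; omitting the trailing flag leaves conserve_cpu false, while -c would turn it on, per the cc map at xnvme.sh@50):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
  $rpc framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
  $rpc bdev_xnvme_delete xnvme_bdev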
00:17:36.070 [2024-11-20 09:14:30.954502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71535 ] 00:17:36.070 [2024-11-20 09:14:31.136094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.329 [2024-11-20 09:14:31.263260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.264 xnvme_bdev 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71535 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71535 ']' 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71535 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.264 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71535 00:17:37.522 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.522 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.522 killing process with pid 71535 00:17:37.522 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71535' 00:17:37.522 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71535 00:17:37.522 09:14:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71535 00:17:39.423 00:17:39.423 real 0m3.646s 00:17:39.423 user 0m3.718s 00:17:39.423 sys 0m0.663s 00:17:39.423 09:14:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.423 09:14:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.423 ************************************ 00:17:39.423 END TEST xnvme_rpc 00:17:39.423 ************************************ 00:17:39.423 09:14:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:39.423 09:14:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:39.423 09:14:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.423 09:14:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:39.423 ************************************ 00:17:39.423 START TEST xnvme_bdevperf 00:17:39.423 ************************************ 00:17:39.423 09:14:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:39.423 09:14:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:39.423 09:14:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:39.423 09:14:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:39.423 09:14:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:39.423 09:14:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:17:39.423 09:14:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:39.423 09:14:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:39.692 { 00:17:39.692 "subsystems": [ 00:17:39.692 { 00:17:39.692 "subsystem": "bdev", 00:17:39.692 "config": [ 00:17:39.692 { 00:17:39.692 "params": { 00:17:39.692 "io_mechanism": "io_uring", 00:17:39.692 "conserve_cpu": false, 00:17:39.692 "filename": "/dev/nvme0n1", 00:17:39.692 "name": "xnvme_bdev" 00:17:39.692 }, 00:17:39.692 "method": "bdev_xnvme_create" 00:17:39.692 }, 00:17:39.692 { 00:17:39.692 "method": "bdev_wait_for_examine" 00:17:39.692 } 00:17:39.692 ] 00:17:39.692 } 00:17:39.692 ] 00:17:39.692 } 00:17:39.692 [2024-11-20 09:14:34.642484] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:17:39.692 [2024-11-20 09:14:34.642678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71615 ] 00:17:39.964 [2024-11-20 09:14:34.826219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.964 [2024-11-20 09:14:34.947671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.223 Running I/O for 5 seconds... 00:17:42.537 46848.00 IOPS, 183.00 MiB/s [2024-11-20T09:14:38.590Z] 47328.00 IOPS, 184.88 MiB/s [2024-11-20T09:14:39.525Z] 48042.67 IOPS, 187.67 MiB/s [2024-11-20T09:14:40.460Z] 48176.00 IOPS, 188.19 MiB/s [2024-11-20T09:14:40.460Z] 48025.60 IOPS, 187.60 MiB/s 00:17:45.340 Latency(us) 00:17:45.340 [2024-11-20T09:14:40.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.340 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:45.340 xnvme_bdev : 5.00 47999.06 187.50 0.00 0.00 1330.30 1057.51 4557.73 00:17:45.340 [2024-11-20T09:14:40.460Z] =================================================================================================================== 00:17:45.340 [2024-11-20T09:14:40.460Z] Total : 47999.06 187.50 0.00 0.00 1330.30 1057.51 4557.73 00:17:46.274 09:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:46.274 09:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:46.274 09:14:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:46.274 09:14:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:46.274 09:14:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:46.274 { 00:17:46.274 "subsystems": [ 00:17:46.274 { 00:17:46.274 "subsystem": "bdev", 00:17:46.274 "config": [ 00:17:46.274 { 00:17:46.274 "params": { 00:17:46.274 "io_mechanism": "io_uring", 00:17:46.274 "conserve_cpu": false, 00:17:46.274 "filename": "/dev/nvme0n1", 00:17:46.274 "name": "xnvme_bdev" 00:17:46.274 }, 00:17:46.274 "method": "bdev_xnvme_create" 00:17:46.274 }, 00:17:46.274 { 00:17:46.274 "method": "bdev_wait_for_examine" 00:17:46.274 } 00:17:46.274 ] 00:17:46.274 } 00:17:46.274 ] 00:17:46.274 } 00:17:46.533 [2024-11-20 09:14:41.408715] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
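A quick cross-check of the randread numbers: this io_uring pass delivered 47999.06 IOPS, and 47999.06 x 4096 B is about 196.6 MB/s, which matches the reported 187.50 MiB/s. Against the equivalent libaio pass above (35108.19 IOPS with conserve_cpu=true), that is roughly a 37% gain, with worst-case latency tightening from 44564 us to 4558 us.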
00:17:46.533 [2024-11-20 09:14:41.408907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71690 ] 00:17:46.533 [2024-11-20 09:14:41.588944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.792 [2024-11-20 09:14:41.708855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.052 Running I/O for 5 seconds... 00:17:49.363 43464.00 IOPS, 169.78 MiB/s [2024-11-20T09:14:45.050Z] 43102.00 IOPS, 168.37 MiB/s [2024-11-20T09:14:46.427Z] 43369.33 IOPS, 169.41 MiB/s [2024-11-20T09:14:47.364Z] 44447.00 IOPS, 173.62 MiB/s 00:17:52.244 Latency(us) 00:17:52.244 [2024-11-20T09:14:47.364Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.244 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:52.244 xnvme_bdev : 5.00 45030.93 175.90 0.00 0.00 1417.10 606.95 3961.95 00:17:52.244 [2024-11-20T09:14:47.364Z] =================================================================================================================== 00:17:52.244 [2024-11-20T09:14:47.364Z] Total : 45030.93 175.90 0.00 0.00 1417.10 606.95 3961.95 00:17:53.180 00:17:53.180 real 0m13.487s 00:17:53.180 user 0m5.675s 00:17:53.180 sys 0m7.602s 00:17:53.180 09:14:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.180 09:14:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:53.180 ************************************ 00:17:53.180 END TEST xnvme_bdevperf 00:17:53.180 ************************************ 00:17:53.180 09:14:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:53.180 09:14:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:53.180 09:14:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.181 09:14:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.181 ************************************ 00:17:53.181 START TEST xnvme_fio_plugin 00:17:53.181 ************************************ 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:53.181 09:14:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:53.181 { 00:17:53.181 "subsystems": [ 00:17:53.181 { 00:17:53.181 "subsystem": "bdev", 00:17:53.181 "config": [ 00:17:53.181 { 00:17:53.181 "params": { 00:17:53.181 "io_mechanism": "io_uring", 00:17:53.181 "conserve_cpu": false, 00:17:53.181 "filename": "/dev/nvme0n1", 00:17:53.181 "name": "xnvme_bdev" 00:17:53.181 }, 00:17:53.181 "method": "bdev_xnvme_create" 00:17:53.181 }, 00:17:53.181 { 00:17:53.181 "method": "bdev_wait_for_examine" 00:17:53.181 } 00:17:53.181 ] 00:17:53.181 } 00:17:53.181 ] 00:17:53.181 } 00:17:53.440 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:53.440 fio-3.35 00:17:53.440 Starting 1 thread 00:18:00.012 00:18:00.012 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71814: Wed Nov 20 09:14:54 2024 00:18:00.012 read: IOPS=41.1k, BW=161MiB/s (169MB/s)(804MiB/5001msec) 00:18:00.012 slat (nsec): min=2328, max=91752, avg=4563.56, stdev=2437.30 00:18:00.012 clat (usec): min=933, max=3473, avg=1362.30, stdev=188.37 00:18:00.012 lat (usec): min=936, max=3510, avg=1366.86, stdev=189.13 00:18:00.012 clat percentiles (usec): 00:18:00.012 | 1.00th=[ 1057], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1205], 00:18:00.012 | 30.00th=[ 1254], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1369], 00:18:00.012 | 70.00th=[ 1418], 80.00th=[ 1483], 90.00th=[ 1614], 95.00th=[ 1729], 00:18:00.012 | 99.00th=[ 1942], 99.50th=[ 2008], 99.90th=[ 2212], 99.95th=[ 2868], 00:18:00.012 | 99.99th=[ 3294] 00:18:00.012 bw ( KiB/s): min=159232, max=176640, per=100.00%, avg=165319.11, 
stdev=5628.77, samples=9 00:18:00.012 iops : min=39808, max=44160, avg=41329.78, stdev=1407.19, samples=9 00:18:00.012 lat (usec) : 1000=0.07% 00:18:00.012 lat (msec) : 2=99.43%, 4=0.50% 00:18:00.012 cpu : usr=34.90%, sys=63.80%, ctx=13, majf=0, minf=762 00:18:00.012 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:00.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.012 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:00.012 issued rwts: total=205760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.012 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:00.012 00:18:00.012 Run status group 0 (all jobs): 00:18:00.012 READ: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=804MiB (843MB), run=5001-5001msec 00:18:00.579 ----------------------------------------------------- 00:18:00.579 Suppressions used: 00:18:00.579 count bytes template 00:18:00.580 1 11 /usr/src/fio/parse.c 00:18:00.580 1 8 libtcmalloc_minimal.so 00:18:00.580 1 904 libcrypto.so 00:18:00.580 ----------------------------------------------------- 00:18:00.580 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:00.580 09:14:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:00.580 { 00:18:00.580 "subsystems": [ 00:18:00.580 { 00:18:00.580 "subsystem": "bdev", 00:18:00.580 "config": [ 00:18:00.580 { 00:18:00.580 "params": { 00:18:00.580 "io_mechanism": "io_uring", 00:18:00.580 "conserve_cpu": false, 00:18:00.580 "filename": "/dev/nvme0n1", 00:18:00.580 "name": "xnvme_bdev" 00:18:00.580 }, 00:18:00.580 "method": "bdev_xnvme_create" 00:18:00.580 }, 00:18:00.580 { 00:18:00.580 "method": "bdev_wait_for_examine" 00:18:00.580 } 00:18:00.580 ] 00:18:00.580 } 00:18:00.580 ] 00:18:00.580 } 00:18:00.838 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:00.838 fio-3.35 00:18:00.838 Starting 1 thread 00:18:07.404 00:18:07.404 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71907: Wed Nov 20 09:15:01 2024 00:18:07.404 write: IOPS=41.1k, BW=161MiB/s (168MB/s)(804MiB/5002msec); 0 zone resets 00:18:07.404 slat (usec): min=2, max=1131, avg= 4.65, stdev= 3.55 00:18:07.404 clat (usec): min=902, max=3614, avg=1367.01, stdev=199.07 00:18:07.404 lat (usec): min=905, max=3628, avg=1371.65, stdev=199.86 00:18:07.404 clat percentiles (usec): 00:18:07.404 | 1.00th=[ 1074], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1205], 00:18:07.404 | 30.00th=[ 1254], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1369], 00:18:07.404 | 70.00th=[ 1418], 80.00th=[ 1483], 90.00th=[ 1631], 95.00th=[ 1762], 00:18:07.404 | 99.00th=[ 1975], 99.50th=[ 2089], 99.90th=[ 2507], 99.95th=[ 3163], 00:18:07.404 | 99.99th=[ 3458] 00:18:07.404 bw ( KiB/s): min=154624, max=179712, per=100.00%, avg=165794.22, stdev=7661.03, samples=9 00:18:07.404 iops : min=38656, max=44928, avg=41448.56, stdev=1915.26, samples=9 00:18:07.404 lat (usec) : 1000=0.05% 00:18:07.404 lat (msec) : 2=99.14%, 4=0.82% 00:18:07.404 cpu : usr=35.09%, sys=63.59%, ctx=13, majf=0, minf=762 00:18:07.404 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:07.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.404 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:07.404 issued rwts: total=0,205756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.404 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.404 00:18:07.404 Run status group 0 (all jobs): 00:18:07.404 WRITE: bw=161MiB/s (168MB/s), 161MiB/s-161MiB/s (168MB/s-168MB/s), io=804MiB (843MB), run=5002-5002msec 00:18:07.972 ----------------------------------------------------- 00:18:07.972 Suppressions used: 00:18:07.972 count bytes template 00:18:07.972 1 11 /usr/src/fio/parse.c 00:18:07.972 1 8 libtcmalloc_minimal.so 00:18:07.972 1 904 libcrypto.so 00:18:07.972 ----------------------------------------------------- 00:18:07.972 00:18:07.972 00:18:07.972 real 0m14.872s 00:18:07.972 user 0m7.204s 00:18:07.972 sys 0m7.261s 00:18:07.972 09:15:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.972 09:15:02 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:07.972 ************************************ 00:18:07.972 END TEST xnvme_fio_plugin 00:18:07.972 ************************************ 00:18:07.972 09:15:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:07.972 09:15:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:07.972 09:15:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:07.972 09:15:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:07.972 09:15:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:07.972 09:15:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.973 09:15:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.973 ************************************ 00:18:07.973 START TEST xnvme_rpc 00:18:07.973 ************************************ 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:07.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71992 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71992 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71992 ']' 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.973 09:15:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:08.231 [2024-11-20 09:15:03.138595] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
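The xnvme_rpc test spinning up here exercises the bdev_xnvme RPCs against a bare spdk_tgt. A minimal sketch of the same sequence, assuming an SPDK build tree with its bundled rpc.py (the readiness loop is a crude stand-in for the suite's waitforlisten helper; paths are illustrative):

./build/bin/spdk_tgt & tgt_pid=$!
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# -c turns on conserve_cpu, matching cc["true"]=-c in the xtrace above
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
# read the config back and pluck individual params, as the rpc_xnvme helper does
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # -> true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill $tgt_pid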
00:18:08.231 [2024-11-20 09:15:03.138779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71992 ] 00:18:08.231 [2024-11-20 09:15:03.315442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.490 [2024-11-20 09:15:03.433522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.426 xnvme_bdev 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71992 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71992 ']' 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71992 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71992 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.426 killing process with pid 71992 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71992' 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71992 00:18:09.426 09:15:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71992 00:18:11.954 00:18:11.954 real 0m3.539s 00:18:11.954 user 0m3.667s 00:18:11.954 sys 0m0.560s 00:18:11.954 09:15:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.954 09:15:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.954 ************************************ 00:18:11.954 END TEST xnvme_rpc 00:18:11.954 ************************************ 00:18:11.954 09:15:06 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:11.954 09:15:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:11.954 09:15:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.954 09:15:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:11.954 ************************************ 00:18:11.954 START TEST xnvme_bdevperf 00:18:11.954 ************************************ 00:18:11.954 09:15:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:11.954 09:15:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:11.954 09:15:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:18:11.954 09:15:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:11.954 09:15:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:11.954 09:15:06 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:11.954 09:15:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:11.954 09:15:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:11.954 { 00:18:11.954 "subsystems": [ 00:18:11.954 { 00:18:11.954 "subsystem": "bdev", 00:18:11.954 "config": [ 00:18:11.954 { 00:18:11.954 "params": { 00:18:11.954 "io_mechanism": "io_uring", 00:18:11.954 "conserve_cpu": true, 00:18:11.954 "filename": "/dev/nvme0n1", 00:18:11.954 "name": "xnvme_bdev" 00:18:11.954 }, 00:18:11.954 "method": "bdev_xnvme_create" 00:18:11.954 }, 00:18:11.954 { 00:18:11.954 "method": "bdev_wait_for_examine" 00:18:11.954 } 00:18:11.954 ] 00:18:11.954 } 00:18:11.954 ] 00:18:11.954 } 00:18:11.954 [2024-11-20 09:15:06.700108] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:18:11.954 [2024-11-20 09:15:06.700289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72072 ] 00:18:11.954 [2024-11-20 09:15:06.874904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.954 [2024-11-20 09:15:06.982460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.276 Running I/O for 5 seconds... 00:18:14.604 46329.00 IOPS, 180.97 MiB/s [2024-11-20T09:15:10.658Z] 47363.00 IOPS, 185.01 MiB/s [2024-11-20T09:15:11.595Z] 48042.67 IOPS, 187.67 MiB/s [2024-11-20T09:15:12.532Z] 47231.50 IOPS, 184.50 MiB/s [2024-11-20T09:15:12.532Z] 47234.80 IOPS, 184.51 MiB/s 00:18:17.412 Latency(us) 00:18:17.412 [2024-11-20T09:15:12.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.412 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:17.412 xnvme_bdev : 5.00 47208.54 184.41 0.00 0.00 1351.73 411.46 7447.27 00:18:17.412 [2024-11-20T09:15:12.532Z] =================================================================================================================== 00:18:17.412 [2024-11-20T09:15:12.532Z] Total : 47208.54 184.41 0.00 0.00 1351.73 411.46 7447.27 00:18:18.350 09:15:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:18.350 09:15:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:18.350 09:15:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:18.350 09:15:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:18.350 09:15:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:18.350 { 00:18:18.350 "subsystems": [ 00:18:18.350 { 00:18:18.350 "subsystem": "bdev", 00:18:18.350 "config": [ 00:18:18.350 { 00:18:18.350 "params": { 00:18:18.350 "io_mechanism": "io_uring", 00:18:18.350 "conserve_cpu": true, 00:18:18.350 "filename": "/dev/nvme0n1", 00:18:18.350 "name": "xnvme_bdev" 00:18:18.350 }, 00:18:18.350 "method": "bdev_xnvme_create" 00:18:18.350 }, 00:18:18.350 { 00:18:18.350 "method": "bdev_wait_for_examine" 00:18:18.350 } 00:18:18.350 ] 00:18:18.350 } 00:18:18.350 ] 00:18:18.350 } 00:18:18.350 [2024-11-20 09:15:13.386677] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:18:18.350 [2024-11-20 09:15:13.386893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72144 ] 00:18:18.609 [2024-11-20 09:15:13.567402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.609 [2024-11-20 09:15:13.695511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.177 Running I/O for 5 seconds... 00:18:21.051 38195.00 IOPS, 149.20 MiB/s [2024-11-20T09:15:17.107Z] 38347.00 IOPS, 149.79 MiB/s [2024-11-20T09:15:18.043Z] 39024.33 IOPS, 152.44 MiB/s [2024-11-20T09:15:19.419Z] 39172.25 IOPS, 153.02 MiB/s 00:18:24.299 Latency(us) 00:18:24.299 [2024-11-20T09:15:19.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.299 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:24.299 xnvme_bdev : 5.00 39146.18 152.91 0.00 0.00 1628.78 385.40 8043.05 00:18:24.299 [2024-11-20T09:15:19.419Z] =================================================================================================================== 00:18:24.299 [2024-11-20T09:15:19.419Z] Total : 39146.18 152.91 0.00 0.00 1628.78 385.40 8043.05 00:18:24.866 00:18:24.866 real 0m13.373s 00:18:24.866 user 0m6.342s 00:18:24.866 sys 0m5.757s 00:18:24.866 09:15:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.866 09:15:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:24.866 ************************************ 00:18:24.866 END TEST xnvme_bdevperf 00:18:24.866 ************************************ 00:18:25.125 09:15:20 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:25.125 09:15:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:25.125 09:15:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.125 09:15:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:25.125 ************************************ 00:18:25.125 START TEST xnvme_fio_plugin 00:18:25.125 ************************************ 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:25.125 09:15:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.125 { 00:18:25.125 "subsystems": [ 00:18:25.125 { 00:18:25.125 "subsystem": "bdev", 00:18:25.125 "config": [ 00:18:25.125 { 00:18:25.125 "params": { 00:18:25.125 "io_mechanism": "io_uring", 00:18:25.125 "conserve_cpu": true, 00:18:25.125 "filename": "/dev/nvme0n1", 00:18:25.125 "name": "xnvme_bdev" 00:18:25.125 }, 00:18:25.125 "method": "bdev_xnvme_create" 00:18:25.125 }, 00:18:25.125 { 00:18:25.125 "method": "bdev_wait_for_examine" 00:18:25.125 } 00:18:25.125 ] 00:18:25.125 } 00:18:25.125 ] 00:18:25.125 } 00:18:25.383 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:25.383 fio-3.35 00:18:25.383 Starting 1 thread 00:18:31.962 00:18:31.962 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72269: Wed Nov 20 09:15:26 2024 00:18:31.962 read: IOPS=45.9k, BW=179MiB/s (188MB/s)(896MiB/5001msec) 00:18:31.962 slat (nsec): min=2321, max=88134, avg=3787.66, stdev=2156.73 00:18:31.962 clat (usec): min=169, max=6288, avg=1240.54, stdev=181.21 00:18:31.962 lat (usec): min=192, max=6301, avg=1244.33, stdev=181.67 00:18:31.962 clat percentiles (usec): 00:18:31.962 | 1.00th=[ 988], 5.00th=[ 1045], 10.00th=[ 1090], 20.00th=[ 1123], 00:18:31.962 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1254], 00:18:31.962 | 70.00th=[ 1287], 80.00th=[ 1336], 90.00th=[ 1401], 95.00th=[ 1467], 00:18:31.962 | 99.00th=[ 1778], 99.50th=[ 1926], 99.90th=[ 2802], 99.95th=[ 3884], 00:18:31.962 | 99.99th=[ 6194] 00:18:31.962 bw ( KiB/s): min=168648, max=196008, per=99.45%, avg=182480.89, 
stdev=8518.88, samples=9 00:18:31.962 iops : min=42162, max=49002, avg=45620.44, stdev=2129.63, samples=9 00:18:31.962 lat (usec) : 250=0.01%, 500=0.01%, 750=0.04%, 1000=1.43% 00:18:31.962 lat (msec) : 2=98.12%, 4=0.34%, 10=0.04% 00:18:31.962 cpu : usr=42.70%, sys=52.84%, ctx=10, majf=0, minf=762 00:18:31.962 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:18:31.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.962 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:31.962 issued rwts: total=229416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.963 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.963 00:18:31.963 Run status group 0 (all jobs): 00:18:31.963 READ: bw=179MiB/s (188MB/s), 179MiB/s-179MiB/s (188MB/s-188MB/s), io=896MiB (940MB), run=5001-5001msec 00:18:32.221 ----------------------------------------------------- 00:18:32.221 Suppressions used: 00:18:32.221 count bytes template 00:18:32.221 1 11 /usr/src/fio/parse.c 00:18:32.221 1 8 libtcmalloc_minimal.so 00:18:32.221 1 904 libcrypto.so 00:18:32.221 ----------------------------------------------------- 00:18:32.221 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:32.479 09:15:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:32.479 { 00:18:32.479 "subsystems": [ 00:18:32.479 { 00:18:32.479 "subsystem": "bdev", 00:18:32.479 "config": [ 00:18:32.479 { 00:18:32.479 "params": { 00:18:32.479 "io_mechanism": "io_uring", 00:18:32.479 "conserve_cpu": true, 00:18:32.479 "filename": "/dev/nvme0n1", 00:18:32.479 "name": "xnvme_bdev" 00:18:32.479 }, 00:18:32.479 "method": "bdev_xnvme_create" 00:18:32.480 }, 00:18:32.480 { 00:18:32.480 "method": "bdev_wait_for_examine" 00:18:32.480 } 00:18:32.480 ] 00:18:32.480 } 00:18:32.480 ] 00:18:32.480 } 00:18:32.737 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:32.737 fio-3.35 00:18:32.737 Starting 1 thread 00:18:39.354 00:18:39.354 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72361: Wed Nov 20 09:15:33 2024 00:18:39.354 write: IOPS=39.8k, BW=155MiB/s (163MB/s)(777MiB/5002msec); 0 zone resets 00:18:39.354 slat (nsec): min=2383, max=95429, avg=4909.35, stdev=2790.42 00:18:39.354 clat (usec): min=929, max=6219, avg=1411.14, stdev=222.76 00:18:39.354 lat (usec): min=933, max=6223, avg=1416.05, stdev=223.72 00:18:39.354 clat percentiles (usec): 00:18:39.354 | 1.00th=[ 1074], 5.00th=[ 1139], 10.00th=[ 1172], 20.00th=[ 1237], 00:18:39.354 | 30.00th=[ 1270], 40.00th=[ 1319], 50.00th=[ 1352], 60.00th=[ 1418], 00:18:39.354 | 70.00th=[ 1483], 80.00th=[ 1582], 90.00th=[ 1729], 95.00th=[ 1860], 00:18:39.354 | 99.00th=[ 2073], 99.50th=[ 2147], 99.90th=[ 2442], 99.95th=[ 2606], 00:18:39.354 | 99.99th=[ 3163] 00:18:39.354 bw ( KiB/s): min=146944, max=182272, per=99.70%, avg=158624.11, stdev=12105.18, samples=9 00:18:39.354 iops : min=36736, max=45568, avg=39656.00, stdev=3026.25, samples=9 00:18:39.354 lat (usec) : 1000=0.07% 00:18:39.354 lat (msec) : 2=98.30%, 4=1.63%, 10=0.01% 00:18:39.354 cpu : usr=47.21%, sys=48.39%, ctx=24, majf=0, minf=762 00:18:39.354 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:39.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.354 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:39.354 issued rwts: total=0,198964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:39.354 00:18:39.354 Run status group 0 (all jobs): 00:18:39.354 WRITE: bw=155MiB/s (163MB/s), 155MiB/s-155MiB/s (163MB/s-163MB/s), io=777MiB (815MB), run=5002-5002msec 00:18:39.674 ----------------------------------------------------- 00:18:39.674 Suppressions used: 00:18:39.674 count bytes template 00:18:39.674 1 11 /usr/src/fio/parse.c 00:18:39.674 1 8 libtcmalloc_minimal.so 00:18:39.674 1 904 libcrypto.so 00:18:39.674 ----------------------------------------------------- 00:18:39.674 00:18:39.932 00:18:39.932 real 0m14.766s 00:18:39.932 user 0m8.182s 00:18:39.932 sys 0m5.862s 00:18:39.932 09:15:34 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.932 09:15:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:39.932 ************************************ 00:18:39.932 END TEST xnvme_fio_plugin 00:18:39.932 ************************************ 00:18:39.932 09:15:34 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:18:39.932 09:15:34 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:18:39.932 09:15:34 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:18:39.932 09:15:34 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:18:39.932 09:15:34 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:18:39.932 09:15:34 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:39.932 09:15:34 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:18:39.932 09:15:34 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:18:39.932 09:15:34 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:39.932 09:15:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:39.932 09:15:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.932 09:15:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:39.932 ************************************ 00:18:39.932 START TEST xnvme_rpc 00:18:39.932 ************************************ 00:18:39.932 09:15:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:39.932 09:15:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:39.932 09:15:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:39.932 09:15:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:39.932 09:15:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:39.932 09:15:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72453 00:18:39.932 09:15:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72453 00:18:39.932 09:15:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:39.933 09:15:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72453 ']' 00:18:39.933 09:15:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.933 09:15:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.933 09:15:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.933 09:15:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.933 09:15:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:39.933 [2024-11-20 09:15:34.972451] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
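This second xnvme_rpc pass targets io_uring_cmd, which submits NVMe passthrough commands via the generic character device rather than the block device, so the create call that follows swaps /dev/nvme0n1 for /dev/ng0n1 and passes an empty string where -c went before, leaving conserve_cpu false. Sketched with the same illustrative rpc.py path as above:

# io_uring_cmd wants the NVMe generic char device (/dev/ngXnY);
# the trailing '' in the xtrace below is the expanded, empty conserve_cpu flag
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd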
00:18:39.933 [2024-11-20 09:15:34.972621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72453 ] 00:18:40.190 [2024-11-20 09:15:35.140625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.190 [2024-11-20 09:15:35.253131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.123 xnvme_bdev 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:41.123 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.124 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72453 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72453 ']' 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72453 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72453 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.381 killing process with pid 72453 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72453' 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72453 00:18:41.381 09:15:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72453 00:18:43.280 00:18:43.280 real 0m3.403s 00:18:43.280 user 0m3.629s 00:18:43.280 sys 0m0.556s 00:18:43.280 09:15:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.280 09:15:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:43.280 ************************************ 00:18:43.280 END TEST xnvme_rpc 00:18:43.280 ************************************ 00:18:43.280 09:15:38 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:43.280 09:15:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.280 09:15:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.280 09:15:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:43.280 ************************************ 00:18:43.280 START TEST xnvme_bdevperf 00:18:43.280 ************************************ 00:18:43.280 09:15:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:43.280 09:15:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:43.280 09:15:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:43.280 09:15:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:43.280 09:15:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:43.280 09:15:38 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:43.280 09:15:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:43.280 09:15:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:43.280 { 00:18:43.280 "subsystems": [ 00:18:43.280 { 00:18:43.280 "subsystem": "bdev", 00:18:43.280 "config": [ 00:18:43.280 { 00:18:43.280 "params": { 00:18:43.280 "io_mechanism": "io_uring_cmd", 00:18:43.280 "conserve_cpu": false, 00:18:43.280 "filename": "/dev/ng0n1", 00:18:43.280 "name": "xnvme_bdev" 00:18:43.280 }, 00:18:43.280 "method": "bdev_xnvme_create" 00:18:43.280 }, 00:18:43.280 { 00:18:43.280 "method": "bdev_wait_for_examine" 00:18:43.280 } 00:18:43.280 ] 00:18:43.280 } 00:18:43.280 ] 00:18:43.280 } 00:18:43.538 [2024-11-20 09:15:38.448693] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:18:43.538 [2024-11-20 09:15:38.448933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72526 ] 00:18:43.538 [2024-11-20 09:15:38.634084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.797 [2024-11-20 09:15:38.793572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.056 Running I/O for 5 seconds... 00:18:46.364 43280.00 IOPS, 169.06 MiB/s [2024-11-20T09:15:42.419Z] 42375.50 IOPS, 165.53 MiB/s [2024-11-20T09:15:43.352Z] 43614.33 IOPS, 170.37 MiB/s [2024-11-20T09:15:44.286Z] 44130.00 IOPS, 172.38 MiB/s 00:18:49.166 Latency(us) 00:18:49.166 [2024-11-20T09:15:44.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.166 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:49.166 xnvme_bdev : 5.00 44954.99 175.61 0.00 0.00 1419.15 390.98 5868.45 00:18:49.166 [2024-11-20T09:15:44.286Z] =================================================================================================================== 00:18:49.166 [2024-11-20T09:15:44.286Z] Total : 44954.99 175.61 0.00 0.00 1419.15 390.98 5868.45 00:18:50.100 09:15:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:50.100 09:15:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:50.100 09:15:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:50.100 09:15:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:50.100 09:15:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:50.100 { 00:18:50.100 "subsystems": [ 00:18:50.100 { 00:18:50.100 "subsystem": "bdev", 00:18:50.100 "config": [ 00:18:50.100 { 00:18:50.100 "params": { 00:18:50.100 "io_mechanism": "io_uring_cmd", 00:18:50.100 "conserve_cpu": false, 00:18:50.100 "filename": "/dev/ng0n1", 00:18:50.100 "name": "xnvme_bdev" 00:18:50.100 }, 00:18:50.100 "method": "bdev_xnvme_create" 00:18:50.100 }, 00:18:50.100 { 00:18:50.100 "method": "bdev_wait_for_examine" 00:18:50.100 } 00:18:50.100 ] 00:18:50.100 } 00:18:50.100 ] 00:18:50.100 } 00:18:50.358 [2024-11-20 09:15:45.283374] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:18:50.358 [2024-11-20 09:15:45.283562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72603 ] 00:18:50.358 [2024-11-20 09:15:45.465952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.616 [2024-11-20 09:15:45.572468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.875 Running I/O for 5 seconds... 00:18:52.760 36781.00 IOPS, 143.68 MiB/s [2024-11-20T09:15:49.255Z] 32811.50 IOPS, 128.17 MiB/s [2024-11-20T09:15:50.190Z] 27448.00 IOPS, 107.22 MiB/s [2024-11-20T09:15:51.126Z] 25416.00 IOPS, 99.28 MiB/s [2024-11-20T09:15:51.126Z] 26225.40 IOPS, 102.44 MiB/s 00:18:56.006 Latency(us) 00:18:56.006 [2024-11-20T09:15:51.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.006 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:56.006 xnvme_bdev : 5.00 26221.88 102.43 0.00 0.00 2435.03 63.30 16205.27 00:18:56.006 [2024-11-20T09:15:51.126Z] =================================================================================================================== 00:18:56.006 [2024-11-20T09:15:51.126Z] Total : 26221.88 102.43 0.00 0.00 2435.03 63.30 16205.27 00:18:56.942 09:15:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:56.942 09:15:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:56.942 09:15:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:56.942 09:15:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:56.942 09:15:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:56.942 { 00:18:56.942 "subsystems": [ 00:18:56.942 { 00:18:56.942 "subsystem": "bdev", 00:18:56.942 "config": [ 00:18:56.942 { 00:18:56.942 "params": { 00:18:56.942 "io_mechanism": "io_uring_cmd", 00:18:56.942 "conserve_cpu": false, 00:18:56.942 "filename": "/dev/ng0n1", 00:18:56.942 "name": "xnvme_bdev" 00:18:56.942 }, 00:18:56.942 "method": "bdev_xnvme_create" 00:18:56.942 }, 00:18:56.942 { 00:18:56.942 "method": "bdev_wait_for_examine" 00:18:56.942 } 00:18:56.942 ] 00:18:56.942 } 00:18:56.942 ] 00:18:56.942 } 00:18:56.942 [2024-11-20 09:15:51.979978] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:18:56.942 [2024-11-20 09:15:51.980274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72678 ] 00:18:57.201 [2024-11-20 09:15:52.170878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.201 [2024-11-20 09:15:52.283301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.768 Running I/O for 5 seconds... 
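Each bdevperf pass hands its bdev config to the app over a process-substitution fd; the /dev/fd/62 in the command line is the read end of a <(gen_conf)-style expansion. A standalone reproduction of this unmap run, assuming the same SPDK tree (the heredoc stands in for the suite's gen_conf helper; the JSON is the config printed in the log):

./build/examples/bdevperf -q 64 -o 4096 -w unmap -t 5 -T xnvme_bdev \
  --json <(cat <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_xnvme_create",
          "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                      "filename": "/dev/ng0n1", "name": "xnvme_bdev" } },
        { "method": "bdev_wait_for_examine" } ] } ]
}
EOF
)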
00:18:59.637 80320.00 IOPS, 313.75 MiB/s [2024-11-20T09:15:55.692Z] 79840.00 IOPS, 311.88 MiB/s [2024-11-20T09:15:56.624Z] 80149.33 IOPS, 313.08 MiB/s [2024-11-20T09:15:57.599Z] 79696.00 IOPS, 311.31 MiB/s 00:19:02.479 Latency(us) 00:19:02.479 [2024-11-20T09:15:57.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.479 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:02.479 xnvme_bdev : 5.00 79060.13 308.83 0.00 0.00 806.21 435.67 3604.48 00:19:02.479 [2024-11-20T09:15:57.599Z] =================================================================================================================== 00:19:02.479 [2024-11-20T09:15:57.599Z] Total : 79060.13 308.83 0.00 0.00 806.21 435.67 3604.48 00:19:03.853 09:15:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:03.853 09:15:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:03.853 09:15:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:03.853 09:15:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:03.853 09:15:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:03.853 { 00:19:03.853 "subsystems": [ 00:19:03.853 { 00:19:03.853 "subsystem": "bdev", 00:19:03.853 "config": [ 00:19:03.853 { 00:19:03.853 "params": { 00:19:03.853 "io_mechanism": "io_uring_cmd", 00:19:03.853 "conserve_cpu": false, 00:19:03.853 "filename": "/dev/ng0n1", 00:19:03.853 "name": "xnvme_bdev" 00:19:03.853 }, 00:19:03.853 "method": "bdev_xnvme_create" 00:19:03.853 }, 00:19:03.853 { 00:19:03.853 "method": "bdev_wait_for_examine" 00:19:03.853 } 00:19:03.853 ] 00:19:03.853 } 00:19:03.853 ] 00:19:03.853 } 00:19:03.853 [2024-11-20 09:15:58.817477] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:19:03.853 [2024-11-20 09:15:58.817677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72761 ] 00:19:04.111 [2024-11-20 09:15:58.999876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.111 [2024-11-20 09:15:59.163891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.675 Running I/O for 5 seconds... 
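At a fixed 4096-byte I/O size the MiB/s column in these tables is simply IOPS/256, which makes them easy to sanity-check; for the unmap total above:

# 79060.13 IOPS * 4096 B per IO / 1048576 B per MiB
echo 'scale=4; 79060.13 * 4096 / 1048576' | bc   # -> 308.8286, the table's 308.83 MiB/s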
00:19:06.541 45425.00 IOPS, 177.44 MiB/s [2024-11-20T09:16:02.594Z] 47843.50 IOPS, 186.89 MiB/s [2024-11-20T09:16:03.969Z] 48739.00 IOPS, 190.39 MiB/s [2024-11-20T09:16:04.905Z] 49178.00 IOPS, 192.10 MiB/s 00:19:09.785 Latency(us) 00:19:09.785 [2024-11-20T09:16:04.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.785 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:09.785 xnvme_bdev : 5.00 49114.43 191.85 0.00 0.00 1298.56 247.62 11558.17 00:19:09.785 [2024-11-20T09:16:04.905Z] =================================================================================================================== 00:19:09.785 [2024-11-20T09:16:04.905Z] Total : 49114.43 191.85 0.00 0.00 1298.56 247.62 11558.17 00:19:10.723 00:19:10.723 real 0m27.368s 00:19:10.723 user 0m14.497s 00:19:10.723 sys 0m12.439s 00:19:10.723 09:16:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.723 09:16:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:10.723 ************************************ 00:19:10.723 END TEST xnvme_bdevperf 00:19:10.723 ************************************ 00:19:10.723 09:16:05 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:10.723 09:16:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:10.723 09:16:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.723 09:16:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:10.723 ************************************ 00:19:10.723 START TEST xnvme_fio_plugin 00:19:10.723 ************************************ 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:10.723 09:16:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:10.723 { 00:19:10.723 "subsystems": [ 00:19:10.723 { 00:19:10.723 "subsystem": "bdev", 00:19:10.723 "config": [ 00:19:10.723 { 00:19:10.723 "params": { 00:19:10.723 "io_mechanism": "io_uring_cmd", 00:19:10.723 "conserve_cpu": false, 00:19:10.723 "filename": "/dev/ng0n1", 00:19:10.723 "name": "xnvme_bdev" 00:19:10.723 }, 00:19:10.723 "method": "bdev_xnvme_create" 00:19:10.723 }, 00:19:10.723 { 00:19:10.723 "method": "bdev_wait_for_examine" 00:19:10.723 } 00:19:10.723 ] 00:19:10.723 } 00:19:10.723 ] 00:19:10.723 } 00:19:10.983 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:10.983 fio-3.35 00:19:10.983 Starting 1 thread 00:19:17.562 00:19:17.562 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72890: Wed Nov 20 09:16:11 2024 00:19:17.562 read: IOPS=51.9k, BW=203MiB/s (213MB/s)(1014MiB/5001msec) 00:19:17.562 slat (nsec): min=2335, max=67007, avg=3342.24, stdev=1828.41 00:19:17.562 clat (usec): min=182, max=6073, avg=1096.81, stdev=126.10 00:19:17.562 lat (usec): min=190, max=6076, avg=1100.16, stdev=126.47 00:19:17.562 clat percentiles (usec): 00:19:17.562 | 1.00th=[ 881], 5.00th=[ 930], 10.00th=[ 955], 20.00th=[ 996], 00:19:17.562 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:19:17.562 | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1303], 00:19:17.562 | 99.00th=[ 1450], 99.50th=[ 1549], 99.90th=[ 1975], 99.95th=[ 2376], 00:19:17.562 | 99.99th=[ 3163] 00:19:17.562 bw ( KiB/s): min=192000, max=221696, per=100.00%, avg=207808.89, stdev=10468.15, samples=9 00:19:17.562 iops : min=48000, max=55424, avg=51952.22, stdev=2617.04, samples=9 00:19:17.562 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=20.41% 00:19:17.562 lat (msec) : 2=79.49%, 4=0.09%, 10=0.01% 00:19:17.562 cpu : usr=34.38%, sys=64.56%, ctx=9, majf=0, minf=762 00:19:17.562 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:17.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.562 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 
00:19:17.562 issued rwts: total=259629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.562 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.562 00:19:17.562 Run status group 0 (all jobs): 00:19:17.562 READ: bw=203MiB/s (213MB/s), 203MiB/s-203MiB/s (213MB/s-213MB/s), io=1014MiB (1063MB), run=5001-5001msec 00:19:18.128 ----------------------------------------------------- 00:19:18.128 Suppressions used: 00:19:18.128 count bytes template 00:19:18.128 1 11 /usr/src/fio/parse.c 00:19:18.128 1 8 libtcmalloc_minimal.so 00:19:18.128 1 904 libcrypto.so 00:19:18.128 ----------------------------------------------------- 00:19:18.128 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:18.128 09:16:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:18.386 { 00:19:18.386 "subsystems": [ 00:19:18.386 { 00:19:18.386 "subsystem": "bdev", 00:19:18.386 "config": [ 00:19:18.386 { 00:19:18.386 "params": { 00:19:18.386 "io_mechanism": "io_uring_cmd", 00:19:18.386 "conserve_cpu": false, 00:19:18.387 "filename": "/dev/ng0n1", 00:19:18.387 "name": "xnvme_bdev" 00:19:18.387 }, 00:19:18.387 "method": "bdev_xnvme_create" 00:19:18.387 }, 00:19:18.387 { 00:19:18.387 "method": "bdev_wait_for_examine" 00:19:18.387 } 00:19:18.387 ] 00:19:18.387 } 00:19:18.387 ] 00:19:18.387 } 00:19:18.387 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:18.387 fio-3.35 00:19:18.387 Starting 1 thread 00:19:25.194 00:19:25.194 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72982: Wed Nov 20 09:16:19 2024 00:19:25.194 write: IOPS=44.8k, BW=175MiB/s (184MB/s)(875MiB/5001msec); 0 zone resets 00:19:25.194 slat (nsec): min=2341, max=64144, avg=4348.65, stdev=2555.20 00:19:25.194 clat (usec): min=587, max=4238, avg=1253.83, stdev=178.27 00:19:25.194 lat (usec): min=590, max=4247, avg=1258.18, stdev=179.10 00:19:25.194 clat percentiles (usec): 00:19:25.194 | 1.00th=[ 971], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1123], 00:19:25.194 | 30.00th=[ 1156], 40.00th=[ 1188], 50.00th=[ 1221], 60.00th=[ 1270], 00:19:25.194 | 70.00th=[ 1303], 80.00th=[ 1352], 90.00th=[ 1467], 95.00th=[ 1582], 00:19:25.194 | 99.00th=[ 1811], 99.50th=[ 1893], 99.90th=[ 2180], 99.95th=[ 3064], 00:19:25.194 | 99.99th=[ 4146] 00:19:25.194 bw ( KiB/s): min=169984, max=193520, per=99.25%, avg=177875.56, stdev=7134.99, samples=9 00:19:25.194 iops : min=42496, max=48380, avg=44468.89, stdev=1783.75, samples=9 00:19:25.194 lat (usec) : 750=0.02%, 1000=2.38% 00:19:25.194 lat (msec) : 2=97.36%, 4=0.22%, 10=0.02% 00:19:25.194 cpu : usr=39.24%, sys=59.66%, ctx=13, majf=0, minf=762 00:19:25.194 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:19:25.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.194 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:19:25.194 issued rwts: total=0,224059,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.194 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:25.194 00:19:25.194 Run status group 0 (all jobs): 00:19:25.194 WRITE: bw=175MiB/s (184MB/s), 175MiB/s-175MiB/s (184MB/s-184MB/s), io=875MiB (918MB), run=5001-5001msec 00:19:25.453 ----------------------------------------------------- 00:19:25.453 Suppressions used: 00:19:25.453 count bytes template 00:19:25.453 1 11 /usr/src/fio/parse.c 00:19:25.453 1 8 libtcmalloc_minimal.so 00:19:25.453 1 904 libcrypto.so 00:19:25.453 ----------------------------------------------------- 00:19:25.453 00:19:25.453 00:19:25.453 real 0m14.802s 00:19:25.453 user 0m7.392s 00:19:25.453 sys 0m7.015s 00:19:25.453 09:16:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:25.453 09:16:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:25.453 ************************************ 00:19:25.453 END TEST xnvme_fio_plugin 00:19:25.453 ************************************ 00:19:25.712 09:16:20 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:19:25.712 09:16:20 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:19:25.712 09:16:20 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 
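With conserve_cpu flipped to true above, the suite reruns the same RPC, bdevperf, and fio tests. A minimal sketch of the create/inspect/delete round-trip that xnvme_rpc performs next (the rpc.py path is an assumption; the RPC names, arguments, and jq filter are taken verbatim from the trace):

#!/usr/bin/env bash
# Hedged sketch of the xnvme_rpc round-trip, not the verbatim test script.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed rpc.py location

# -c is the conserve_cpu flag; drop it for the conserve_cpu=false variant.
"$RPC" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c

# Inspect the registered params the same way the test's rpc_xnvme helper does.
"$RPC" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# expected output: true

"$RPC" bdev_xnvme_delete xnvme_bdev

Against a running spdk_tgt this creates the xnvme bdev on /dev/ng0n1 over io_uring_cmd, confirms that conserve_cpu made it into the config, and tears the bdev down again.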
00:19:25.712 09:16:20 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:19:25.712 09:16:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:25.712 09:16:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:25.712 09:16:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:25.712 ************************************ 00:19:25.712 START TEST xnvme_rpc 00:19:25.712 ************************************ 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73062 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73062 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73062 ']' 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.712 09:16:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:25.712 [2024-11-20 09:16:20.758178] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
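The waitforlisten helper above blocks until the freshly launched spdk_tgt accepts RPCs on /var/tmp/spdk.sock. A standalone approximation of that launch-and-wait pattern, assuming readiness can be detected by polling for the Unix socket:

#!/usr/bin/env bash
# Approximation of the spdk_tgt launch + waitforlisten pattern in this trace.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
SOCK=/var/tmp/spdk.sock                        # default SPDK RPC socket

"$SPDK_TGT" &                                  # run the target in the background
tgt_pid=$!
until [[ -S $SOCK ]]; do sleep 0.1; done       # poll until the socket appears
echo "spdk_tgt is up (pid $tgt_pid)"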
00:19:25.712 [2024-11-20 09:16:20.758393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73062 ] 00:19:25.970 [2024-11-20 09:16:20.946646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.970 [2024-11-20 09:16:21.078776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:26.905 xnvme_bdev 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:26.905 09:16:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73062 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73062 ']' 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73062 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73062 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.164 killing process with pid 73062 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73062' 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73062 00:19:27.164 09:16:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73062 00:19:29.697 00:19:29.697 real 0m3.942s 00:19:29.697 user 0m4.135s 00:19:29.697 sys 0m0.613s 00:19:29.697 09:16:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.697 09:16:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:29.697 ************************************ 00:19:29.697 END TEST xnvme_rpc 00:19:29.697 ************************************ 00:19:29.697 09:16:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:19:29.697 09:16:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:29.697 09:16:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.697 09:16:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:29.697 ************************************ 00:19:29.697 START TEST xnvme_bdevperf 00:19:29.697 ************************************ 00:19:29.697 09:16:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:19:29.697 09:16:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:19:29.697 09:16:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:19:29.697 09:16:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:29.697 09:16:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:19:29.697 09:16:24 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:19:29.697 09:16:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:29.697 09:16:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:29.697 { 00:19:29.697 "subsystems": [ 00:19:29.697 { 00:19:29.697 "subsystem": "bdev", 00:19:29.697 "config": [ 00:19:29.697 { 00:19:29.697 "params": { 00:19:29.697 "io_mechanism": "io_uring_cmd", 00:19:29.697 "conserve_cpu": true, 00:19:29.697 "filename": "/dev/ng0n1", 00:19:29.697 "name": "xnvme_bdev" 00:19:29.697 }, 00:19:29.697 "method": "bdev_xnvme_create" 00:19:29.697 }, 00:19:29.697 { 00:19:29.697 "method": "bdev_wait_for_examine" 00:19:29.697 } 00:19:29.697 ] 00:19:29.697 } 00:19:29.697 ] 00:19:29.697 } 00:19:29.697 [2024-11-20 09:16:24.725206] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:19:29.697 [2024-11-20 09:16:24.725385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73147 ] 00:19:29.956 [2024-11-20 09:16:24.912051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.956 [2024-11-20 09:16:25.049926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.524 Running I/O for 5 seconds... 00:19:32.396 45742.00 IOPS, 178.68 MiB/s [2024-11-20T09:16:28.453Z] 46383.00 IOPS, 181.18 MiB/s [2024-11-20T09:16:29.842Z] 46877.33 IOPS, 183.11 MiB/s [2024-11-20T09:16:30.410Z] 46795.75 IOPS, 182.80 MiB/s 00:19:35.290 Latency(us) 00:19:35.290 [2024-11-20T09:16:30.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.290 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:19:35.290 xnvme_bdev : 5.00 47254.22 184.59 0.00 0.00 1350.39 323.96 5004.57 00:19:35.290 [2024-11-20T09:16:30.410Z] =================================================================================================================== 00:19:35.290 [2024-11-20T09:16:30.410Z] Total : 47254.22 184.59 0.00 0.00 1350.39 323.96 5004.57 00:19:36.667 09:16:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:36.667 09:16:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:19:36.667 09:16:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:36.667 09:16:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:36.667 09:16:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:36.667 { 00:19:36.667 "subsystems": [ 00:19:36.667 { 00:19:36.667 "subsystem": "bdev", 00:19:36.667 "config": [ 00:19:36.667 { 00:19:36.667 "params": { 00:19:36.667 "io_mechanism": "io_uring_cmd", 00:19:36.667 "conserve_cpu": true, 00:19:36.667 "filename": "/dev/ng0n1", 00:19:36.667 "name": "xnvme_bdev" 00:19:36.667 }, 00:19:36.667 "method": "bdev_xnvme_create" 00:19:36.667 }, 00:19:36.667 { 00:19:36.667 "method": "bdev_wait_for_examine" 00:19:36.667 } 00:19:36.667 ] 00:19:36.667 } 00:19:36.667 ] 00:19:36.667 } 00:19:36.667 [2024-11-20 09:16:31.517335] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
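Every workload in this bdevperf pass uses the same invocation, with the JSON bdev config streamed in over file descriptor 62. A self-contained sketch of that pattern; feeding fd 62 from a here-doc is an assumption about how gen_conf is plumbed, while the flags and JSON mirror the trace:

#!/usr/bin/env bash
# Sketch of one bdevperf workload run (flags and JSON as printed above).
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

"$BDEVPERF" --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 \
  62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON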
00:19:36.667 [2024-11-20 09:16:31.517500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73224 ] 00:19:36.667 [2024-11-20 09:16:31.698872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.926 [2024-11-20 09:16:31.827520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.185 Running I/O for 5 seconds... 00:19:39.500 43451.00 IOPS, 169.73 MiB/s [2024-11-20T09:16:35.188Z] 44700.00 IOPS, 174.61 MiB/s [2024-11-20T09:16:36.567Z] 45017.67 IOPS, 175.85 MiB/s [2024-11-20T09:16:37.518Z] 45394.00 IOPS, 177.32 MiB/s [2024-11-20T09:16:37.518Z] 45646.40 IOPS, 178.31 MiB/s 00:19:42.398 Latency(us) 00:19:42.398 [2024-11-20T09:16:37.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.398 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:19:42.398 xnvme_bdev : 5.01 45625.41 178.22 0.00 0.00 1398.76 422.63 7089.80 00:19:42.398 [2024-11-20T09:16:37.518Z] =================================================================================================================== 00:19:42.398 [2024-11-20T09:16:37.518Z] Total : 45625.41 178.22 0.00 0.00 1398.76 422.63 7089.80 00:19:43.348 09:16:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:43.348 09:16:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:19:43.348 09:16:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:43.348 09:16:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:43.348 09:16:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:43.348 { 00:19:43.348 "subsystems": [ 00:19:43.348 { 00:19:43.348 "subsystem": "bdev", 00:19:43.348 "config": [ 00:19:43.348 { 00:19:43.348 "params": { 00:19:43.348 "io_mechanism": "io_uring_cmd", 00:19:43.348 "conserve_cpu": true, 00:19:43.348 "filename": "/dev/ng0n1", 00:19:43.348 "name": "xnvme_bdev" 00:19:43.348 }, 00:19:43.348 "method": "bdev_xnvme_create" 00:19:43.348 }, 00:19:43.348 { 00:19:43.348 "method": "bdev_wait_for_examine" 00:19:43.348 } 00:19:43.348 ] 00:19:43.348 } 00:19:43.348 ] 00:19:43.348 } 00:19:43.348 [2024-11-20 09:16:38.461398] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:19:43.348 [2024-11-20 09:16:38.461601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73298 ] 00:19:43.607 [2024-11-20 09:16:38.662100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.867 [2024-11-20 09:16:38.853181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.434 Running I/O for 5 seconds... 
00:19:46.306 72192.00 IOPS, 282.00 MiB/s [2024-11-20T09:16:42.361Z] 76096.00 IOPS, 297.25 MiB/s [2024-11-20T09:16:43.740Z] 77440.00 IOPS, 302.50 MiB/s [2024-11-20T09:16:44.676Z] 77376.00 IOPS, 302.25 MiB/s 00:19:49.556 Latency(us) 00:19:49.556 [2024-11-20T09:16:44.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.556 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:19:49.556 xnvme_bdev : 5.00 77832.02 304.03 0.00 0.00 818.82 463.59 2591.65 00:19:49.556 [2024-11-20T09:16:44.676Z] =================================================================================================================== 00:19:49.556 [2024-11-20T09:16:44.676Z] Total : 77832.02 304.03 0.00 0.00 818.82 463.59 2591.65 00:19:50.493 09:16:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:50.493 09:16:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:19:50.493 09:16:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:19:50.493 09:16:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:19:50.493 09:16:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:50.493 { 00:19:50.493 "subsystems": [ 00:19:50.493 { 00:19:50.493 "subsystem": "bdev", 00:19:50.493 "config": [ 00:19:50.493 { 00:19:50.493 "params": { 00:19:50.493 "io_mechanism": "io_uring_cmd", 00:19:50.493 "conserve_cpu": true, 00:19:50.493 "filename": "/dev/ng0n1", 00:19:50.493 "name": "xnvme_bdev" 00:19:50.493 }, 00:19:50.493 "method": "bdev_xnvme_create" 00:19:50.493 }, 00:19:50.493 { 00:19:50.493 "method": "bdev_wait_for_examine" 00:19:50.493 } 00:19:50.493 ] 00:19:50.493 } 00:19:50.493 ] 00:19:50.493 } 00:19:50.493 [2024-11-20 09:16:45.383508] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:19:50.493 [2024-11-20 09:16:45.383704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73378 ] 00:19:50.493 [2024-11-20 09:16:45.554958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.752 [2024-11-20 09:16:45.678057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.010 Running I/O for 5 seconds... 
00:19:53.319 42802.00 IOPS, 167.20 MiB/s [2024-11-20T09:16:49.372Z] 42343.00 IOPS, 165.40 MiB/s [2024-11-20T09:16:50.303Z] 42025.67 IOPS, 164.16 MiB/s [2024-11-20T09:16:51.236Z] 41781.25 IOPS, 163.21 MiB/s 00:19:56.116 Latency(us) 00:19:56.116 [2024-11-20T09:16:51.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.116 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:19:56.116 xnvme_bdev : 5.00 41510.57 162.15 0.00 0.00 1533.95 102.87 17992.61 00:19:56.116 [2024-11-20T09:16:51.236Z] =================================================================================================================== 00:19:56.116 [2024-11-20T09:16:51.236Z] Total : 41510.57 162.15 0.00 0.00 1533.95 102.87 17992.61 00:19:57.057 00:19:57.057 real 0m27.382s 00:19:57.057 user 0m16.987s 00:19:57.057 sys 0m7.729s 00:19:57.057 09:16:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.057 09:16:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:57.057 ************************************ 00:19:57.057 END TEST xnvme_bdevperf 00:19:57.057 ************************************ 00:19:57.057 09:16:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:57.057 09:16:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:57.057 09:16:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.057 09:16:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:57.057 ************************************ 00:19:57.057 START TEST xnvme_fio_plugin 00:19:57.057 ************************************ 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:57.057 09:16:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:57.057 { 00:19:57.057 "subsystems": [ 00:19:57.057 { 00:19:57.057 "subsystem": "bdev", 00:19:57.057 "config": [ 00:19:57.057 { 00:19:57.057 "params": { 00:19:57.057 "io_mechanism": "io_uring_cmd", 00:19:57.057 "conserve_cpu": true, 00:19:57.057 "filename": "/dev/ng0n1", 00:19:57.057 "name": "xnvme_bdev" 00:19:57.057 }, 00:19:57.057 "method": "bdev_xnvme_create" 00:19:57.057 }, 00:19:57.057 { 00:19:57.057 "method": "bdev_wait_for_examine" 00:19:57.057 } 00:19:57.057 ] 00:19:57.057 } 00:19:57.057 ] 00:19:57.057 } 00:19:57.340 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:57.340 fio-3.35 00:19:57.340 Starting 1 thread 00:20:03.912 00:20:03.912 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73501: Wed Nov 20 09:16:58 2024 00:20:03.912 read: IOPS=49.2k, BW=192MiB/s (201MB/s)(960MiB/5001msec) 00:20:03.912 slat (nsec): min=2282, max=63215, avg=2755.44, stdev=1519.88 00:20:03.912 clat (usec): min=723, max=2906, avg=1189.79, stdev=102.58 00:20:03.912 lat (usec): min=725, max=2969, avg=1192.55, stdev=102.83 00:20:03.912 clat percentiles (usec): 00:20:03.912 | 1.00th=[ 1004], 5.00th=[ 1057], 10.00th=[ 1074], 20.00th=[ 1106], 00:20:03.912 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1205], 00:20:03.912 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1303], 95.00th=[ 1352], 00:20:03.912 | 99.00th=[ 1532], 99.50th=[ 1631], 99.90th=[ 1778], 99.95th=[ 1844], 00:20:03.912 | 99.99th=[ 2671] 00:20:03.912 bw ( KiB/s): min=187936, max=201216, per=99.90%, avg=196453.67, stdev=4243.19, samples=9 00:20:03.912 iops : min=46984, max=50304, avg=49113.33, stdev=1060.76, samples=9 00:20:03.912 lat (usec) : 750=0.01%, 1000=0.91% 00:20:03.912 lat (msec) : 2=99.05%, 4=0.03% 00:20:03.912 cpu : usr=57.14%, sys=38.82%, ctx=9, majf=0, minf=762 00:20:03.912 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:20:03.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.912 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:20:03.912 issued rwts: 
total=245865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.912 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:03.912 00:20:03.912 Run status group 0 (all jobs): 00:20:03.912 READ: bw=192MiB/s (201MB/s), 192MiB/s-192MiB/s (201MB/s-201MB/s), io=960MiB (1007MB), run=5001-5001msec 00:20:04.478 ----------------------------------------------------- 00:20:04.478 Suppressions used: 00:20:04.478 count bytes template 00:20:04.478 1 11 /usr/src/fio/parse.c 00:20:04.478 1 8 libtcmalloc_minimal.so 00:20:04.478 1 904 libcrypto.so 00:20:04.478 ----------------------------------------------------- 00:20:04.478 00:20:04.478 09:16:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:04.478 09:16:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:04.478 09:16:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:20:04.478 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:20:04.478 09:16:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:20:04.478 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:04.478 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:04.478 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:04.479 09:16:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:20:04.479 { 00:20:04.479 "subsystems": [ 00:20:04.479 { 00:20:04.479 "subsystem": "bdev", 00:20:04.479 "config": [ 00:20:04.479 { 00:20:04.479 "params": { 00:20:04.479 "io_mechanism": "io_uring_cmd", 00:20:04.479 "conserve_cpu": true, 00:20:04.479 "filename": "/dev/ng0n1", 00:20:04.479 "name": "xnvme_bdev" 00:20:04.479 }, 00:20:04.479 "method": "bdev_xnvme_create" 00:20:04.479 }, 00:20:04.479 { 00:20:04.479 "method": "bdev_wait_for_examine" 00:20:04.479 } 00:20:04.479 ] 00:20:04.479 } 00:20:04.479 ] 00:20:04.479 } 00:20:04.737 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:20:04.737 fio-3.35 00:20:04.737 Starting 1 thread 00:20:11.301 00:20:11.301 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73593: Wed Nov 20 09:17:05 2024 00:20:11.301 write: IOPS=44.8k, BW=175MiB/s (183MB/s)(875MiB/5003msec); 0 zone resets 00:20:11.301 slat (usec): min=2, max=253, avg= 4.14, stdev= 4.46 00:20:11.301 clat (usec): min=70, max=9654, avg=1283.31, stdev=634.52 00:20:11.301 lat (usec): min=74, max=9660, avg=1287.46, stdev=634.83 00:20:11.301 clat percentiles (usec): 00:20:11.301 | 1.00th=[ 363], 5.00th=[ 783], 10.00th=[ 1020], 20.00th=[ 1090], 00:20:11.301 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1221], 00:20:11.301 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[ 1385], 95.00th=[ 1762], 00:20:11.301 | 99.00th=[ 4817], 99.50th=[ 5342], 99.90th=[ 6259], 99.95th=[ 6456], 00:20:11.301 | 99.99th=[ 6980] 00:20:11.301 bw ( KiB/s): min=146344, max=195584, per=100.00%, avg=183795.56, stdev=15145.59, samples=9 00:20:11.301 iops : min=36586, max=48896, avg=45948.89, stdev=3786.40, samples=9 00:20:11.301 lat (usec) : 100=0.01%, 250=0.39%, 500=1.72%, 750=2.50%, 1000=4.29% 00:20:11.301 lat (msec) : 2=86.95%, 4=2.15%, 10=1.99% 00:20:11.301 cpu : usr=59.00%, sys=33.49%, ctx=17, majf=0, minf=762 00:20:11.301 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.7%, 16=22.5%, 32=55.2%, >=64=2.5% 00:20:11.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.301 complete : 0=0.0%, 4=98.0%, 8=0.2%, 16=0.2%, 32=0.2%, 64=1.4%, >=64=0.0% 00:20:11.301 issued rwts: total=0,223906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.301 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:11.301 00:20:11.301 Run status group 0 (all jobs): 00:20:11.301 WRITE: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=875MiB (917MB), run=5003-5003msec 00:20:11.868 ----------------------------------------------------- 00:20:11.868 Suppressions used: 00:20:11.868 count bytes template 00:20:11.868 1 11 /usr/src/fio/parse.c 00:20:11.868 1 8 libtcmalloc_minimal.so 00:20:11.868 1 904 libcrypto.so 00:20:11.868 ----------------------------------------------------- 00:20:11.868 00:20:11.868 00:20:11.868 real 0m14.700s 00:20:11.868 user 0m9.408s 00:20:11.868 sys 0m4.451s 00:20:11.868 ************************************ 00:20:11.868 END TEST xnvme_fio_plugin 00:20:11.868 ************************************ 00:20:11.868 09:17:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.868 09:17:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:20:11.868 09:17:06 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73062 00:20:11.868 09:17:06 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73062 ']' 00:20:11.868 09:17:06 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73062 00:20:11.868 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73062) - No such process 00:20:11.868 Process with pid 73062 is not found 00:20:11.868 09:17:06 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73062 is not found' 00:20:11.868 00:20:11.868 real 3m47.459s 00:20:11.868 user 2m0.954s 00:20:11.868 sys 1m29.822s 00:20:11.868 ************************************ 00:20:11.868 END TEST nvme_xnvme 00:20:11.868 ************************************ 00:20:11.868 09:17:06 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:11.868 09:17:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:11.868 09:17:06 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:11.868 09:17:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:11.868 09:17:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.868 09:17:06 -- common/autotest_common.sh@10 -- # set +x 00:20:11.868 ************************************ 00:20:11.868 START TEST blockdev_xnvme 00:20:11.868 ************************************ 00:20:11.868 09:17:06 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:20:11.868 * Looking for test storage... 00:20:11.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:11.868 09:17:06 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:11.868 09:17:06 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:20:11.868 09:17:06 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:12.125 09:17:07 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.125 09:17:07 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.126 09:17:07 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.126 09:17:07 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:12.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.126 --rc genhtml_branch_coverage=1 00:20:12.126 --rc genhtml_function_coverage=1 00:20:12.126 --rc genhtml_legend=1 00:20:12.126 --rc geninfo_all_blocks=1 00:20:12.126 --rc geninfo_unexecuted_blocks=1 00:20:12.126 00:20:12.126 ' 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:12.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.126 --rc genhtml_branch_coverage=1 00:20:12.126 --rc genhtml_function_coverage=1 00:20:12.126 --rc genhtml_legend=1 00:20:12.126 --rc geninfo_all_blocks=1 00:20:12.126 --rc geninfo_unexecuted_blocks=1 00:20:12.126 00:20:12.126 ' 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:12.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.126 --rc genhtml_branch_coverage=1 00:20:12.126 --rc genhtml_function_coverage=1 00:20:12.126 --rc genhtml_legend=1 00:20:12.126 --rc geninfo_all_blocks=1 00:20:12.126 --rc geninfo_unexecuted_blocks=1 00:20:12.126 00:20:12.126 ' 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:12.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.126 --rc genhtml_branch_coverage=1 00:20:12.126 --rc genhtml_function_coverage=1 00:20:12.126 --rc genhtml_legend=1 00:20:12.126 --rc geninfo_all_blocks=1 00:20:12.126 --rc geninfo_unexecuted_blocks=1 00:20:12.126 00:20:12.126 ' 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73722 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:12.126 09:17:07 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73722 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73722 ']' 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.126 09:17:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:12.126 [2024-11-20 09:17:07.217540] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
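The setup_xnvme_conf step that follows enumerates the NVMe namespaces, drops zoned ones via sysfs, and emits one bdev_xnvme_create line per remaining device. A condensed sketch that folds its device and zoned-check loops into one (the merged structure is an assumption; the sysfs test, the ${nvme##*/} naming, the io_uring mechanism, and the -c flag all mirror the trace below):

#!/usr/bin/env bash
# Condensed sketch of setup_xnvme_conf: filter zoned namespaces, then emit a
# bdev_xnvme_create line per block device (io_uring + -c, as traced below).
io_mechanism=io_uring
for nvme in /dev/nvme*n*; do
  [[ -b $nvme ]] || continue                     # block devices only
  name=${nvme##*/}
  zoned=/sys/block/$name/queue/zoned
  if [[ -e $zoned && $(<"$zoned") != none ]]; then
    continue                                     # skip zoned namespaces
  fi
  echo "bdev_xnvme_create $nvme $name $io_mechanism -c"
done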
00:20:12.126 [2024-11-20 09:17:07.218057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73722 ] 00:20:12.384 [2024-11-20 09:17:07.406362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.643 [2024-11-20 09:17:07.530610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.619 09:17:08 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.619 09:17:08 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:20:13.619 09:17:08 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:20:13.619 09:17:08 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:20:13.619 09:17:08 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:20:13.619 09:17:08 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:20:13.619 09:17:08 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:13.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:14.440 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:14.440 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:14.440 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:20:14.440 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.698 09:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.698 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:20:14.698 nvme0n1 00:20:14.698 nvme0n2 00:20:14.698 nvme0n3 00:20:14.698 nvme1n1 00:20:14.698 nvme2n1 00:20:14.698 nvme3n1 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq 
-r '.[] | select(.claimed == false)' 00:20:14.699 09:17:09 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:20:14.699 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1c79e1d9-dad6-4207-b537-3d9c4845e7dd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1c79e1d9-dad6-4207-b537-3d9c4845e7dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "f7b5ab7e-e08f-441c-811c-7baa38393ea1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f7b5ab7e-e08f-441c-811c-7baa38393ea1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "2f1815be-16ea-4312-816e-087c13555dc3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2f1815be-16ea-4312-816e-087c13555dc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "40ef16e0-bc33-4f8a-a370-7ef04e2266f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "40ef16e0-bc33-4f8a-a370-7ef04e2266f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f5a5a62e-62cc-4857-9760-9840179c6972"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f5a5a62e-62cc-4857-9760-9840179c6972",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "86f463e2-5118-41f5-b78c-0e417d530744"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "86f463e2-5118-41f5-b78c-0e417d530744",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:14.958 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:20:14.958 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:20:14.958 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:20:14.958 09:17:09 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73722 00:20:14.958 09:17:09 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73722 ']' 00:20:14.958 09:17:09 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73722 00:20:14.958 09:17:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:20:14.958 09:17:09 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.958 09:17:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73722 00:20:14.958 killing process with pid 73722 00:20:14.958 09:17:09 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.958 09:17:09 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.958 09:17:09 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73722' 00:20:14.958 09:17:09 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73722 00:20:14.958 09:17:09 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73722 00:20:16.860 09:17:11 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:16.860 09:17:11 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:16.860 09:17:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:16.860 09:17:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.860 09:17:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:16.860 ************************************ 00:20:16.860 START TEST bdev_hello_world 00:20:16.860 ************************************ 00:20:16.860 09:17:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:20:17.118 [2024-11-20 09:17:12.082469] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:20:17.118 [2024-11-20 09:17:12.083070] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74012 ] 00:20:17.376 [2024-11-20 09:17:12.270985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.376 [2024-11-20 09:17:12.431505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.944 [2024-11-20 09:17:12.861350] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:17.944 [2024-11-20 09:17:12.861416] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:20:17.944 [2024-11-20 09:17:12.861455] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:17.944 [2024-11-20 09:17:12.864000] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:17.944 [2024-11-20 09:17:12.864442] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:17.944 [2024-11-20 09:17:12.864502] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:17.944 [2024-11-20 09:17:12.864738] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
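The sequence above is the whole xnvme setup plus smoke test: the for-loop registered one io_uring xnvme bdev per /dev/nvme*n* namespace (the six bdev_xnvme_create ... io_uring -c calls echoed by printf), bdev_get_bdevs confirmed all six are present and unclaimed, and hello_bdev then wrote "Hello World!" through nvme0n1 and read it back. A minimal sketch of rerunning that last step by hand, with the binary and config paths exactly as they appear in this log (they will differ outside this CI VM):

    # Open bdev nvme0n1 from the test's bdev config, write a buffer, read it back:
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1

--json points at the per-test bdev config used throughout this run; -b names the bdev the example opens.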
00:20:17.944 00:20:17.944 [2024-11-20 09:17:12.864767] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:18.881 00:20:18.881 real 0m1.891s 00:20:18.881 user 0m1.436s 00:20:18.881 sys 0m0.338s 00:20:18.881 09:17:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.881 09:17:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:18.881 ************************************ 00:20:18.881 END TEST bdev_hello_world 00:20:18.881 ************************************ 00:20:18.881 09:17:13 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:20:18.881 09:17:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:18.881 09:17:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.881 09:17:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:18.881 ************************************ 00:20:18.881 START TEST bdev_bounds 00:20:18.881 ************************************ 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:18.881 Process bdevio pid: 74054 00:20:18.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74054 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74054' 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74054 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74054 ']' 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.881 09:17:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:19.140 [2024-11-20 09:17:14.027944] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:20:19.140 [2024-11-20 09:17:14.028416] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74054 ] 00:20:19.140 [2024-11-20 09:17:14.223572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.399 [2024-11-20 09:17:14.377571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.399 [2024-11-20 09:17:14.377724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.399 [2024-11-20 09:17:14.377749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.967 09:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.967 09:17:14 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:19.967 09:17:14 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:19.967 I/O targets: 00:20:19.967 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:19.967 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:19.967 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:20:19.967 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:20:19.967 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:20:19.967 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:20:19.967 00:20:19.967 00:20:19.967 CUnit - A unit testing framework for C - Version 2.1-3 00:20:19.967 http://cunit.sourceforge.net/ 00:20:19.967 00:20:19.967 00:20:19.967 Suite: bdevio tests on: nvme3n1 00:20:19.967 Test: blockdev write read block ...passed 00:20:19.967 Test: blockdev write zeroes read block ...passed 00:20:19.967 Test: blockdev write zeroes read no split ...passed 00:20:19.967 Test: blockdev write zeroes read split ...passed 00:20:20.226 Test: blockdev write zeroes read split partial ...passed 00:20:20.226 Test: blockdev reset ...passed 00:20:20.226 Test: blockdev write read 8 blocks ...passed 00:20:20.226 Test: blockdev write read size > 128k ...passed 00:20:20.226 Test: blockdev write read invalid size ...passed 00:20:20.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:20.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:20.226 Test: blockdev write read max offset ...passed 00:20:20.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:20.226 Test: blockdev writev readv 8 blocks ...passed 00:20:20.226 Test: blockdev writev readv 30 x 1block ...passed 00:20:20.226 Test: blockdev writev readv block ...passed 00:20:20.226 Test: blockdev writev readv size > 128k ...passed 00:20:20.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:20.226 Test: blockdev comparev and writev ...passed 00:20:20.226 Test: blockdev nvme passthru rw ...passed 00:20:20.226 Test: blockdev nvme passthru vendor specific ...passed 00:20:20.226 Test: blockdev nvme admin passthru ...passed 00:20:20.226 Test: blockdev copy ...passed 00:20:20.226 Suite: bdevio tests on: nvme2n1 00:20:20.226 Test: blockdev write read block ...passed 00:20:20.226 Test: blockdev write zeroes read block ...passed 00:20:20.226 Test: blockdev write zeroes read no split ...passed 00:20:20.226 Test: blockdev write zeroes read split ...passed 00:20:20.226 Test: blockdev write zeroes read split partial ...passed 00:20:20.226 Test: blockdev reset ...passed 
00:20:20.226 Test: blockdev write read 8 blocks ...passed 00:20:20.226 Test: blockdev write read size > 128k ...passed 00:20:20.226 Test: blockdev write read invalid size ...passed 00:20:20.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:20.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:20.226 Test: blockdev write read max offset ...passed 00:20:20.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:20.226 Test: blockdev writev readv 8 blocks ...passed 00:20:20.226 Test: blockdev writev readv 30 x 1block ...passed 00:20:20.226 Test: blockdev writev readv block ...passed 00:20:20.226 Test: blockdev writev readv size > 128k ...passed 00:20:20.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:20.226 Test: blockdev comparev and writev ...passed 00:20:20.226 Test: blockdev nvme passthru rw ...passed 00:20:20.226 Test: blockdev nvme passthru vendor specific ...passed 00:20:20.226 Test: blockdev nvme admin passthru ...passed 00:20:20.226 Test: blockdev copy ...passed 00:20:20.226 Suite: bdevio tests on: nvme1n1 00:20:20.226 Test: blockdev write read block ...passed 00:20:20.226 Test: blockdev write zeroes read block ...passed 00:20:20.226 Test: blockdev write zeroes read no split ...passed 00:20:20.226 Test: blockdev write zeroes read split ...passed 00:20:20.226 Test: blockdev write zeroes read split partial ...passed 00:20:20.226 Test: blockdev reset ...passed 00:20:20.226 Test: blockdev write read 8 blocks ...passed 00:20:20.226 Test: blockdev write read size > 128k ...passed 00:20:20.226 Test: blockdev write read invalid size ...passed 00:20:20.226 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:20.226 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:20.226 Test: blockdev write read max offset ...passed 00:20:20.226 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:20.226 Test: blockdev writev readv 8 blocks ...passed 00:20:20.226 Test: blockdev writev readv 30 x 1block ...passed 00:20:20.226 Test: blockdev writev readv block ...passed 00:20:20.226 Test: blockdev writev readv size > 128k ...passed 00:20:20.226 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:20.226 Test: blockdev comparev and writev ...passed 00:20:20.226 Test: blockdev nvme passthru rw ...passed 00:20:20.226 Test: blockdev nvme passthru vendor specific ...passed 00:20:20.226 Test: blockdev nvme admin passthru ...passed 00:20:20.226 Test: blockdev copy ...passed 00:20:20.226 Suite: bdevio tests on: nvme0n3 00:20:20.226 Test: blockdev write read block ...passed 00:20:20.226 Test: blockdev write zeroes read block ...passed 00:20:20.226 Test: blockdev write zeroes read no split ...passed 00:20:20.226 Test: blockdev write zeroes read split ...passed 00:20:20.486 Test: blockdev write zeroes read split partial ...passed 00:20:20.487 Test: blockdev reset ...passed 00:20:20.487 Test: blockdev write read 8 blocks ...passed 00:20:20.487 Test: blockdev write read size > 128k ...passed 00:20:20.487 Test: blockdev write read invalid size ...passed 00:20:20.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:20.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:20.487 Test: blockdev write read max offset ...passed 00:20:20.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:20.487 Test: blockdev writev readv 8 blocks 
...passed 00:20:20.487 Test: blockdev writev readv 30 x 1block ...passed 00:20:20.487 Test: blockdev writev readv block ...passed 00:20:20.487 Test: blockdev writev readv size > 128k ...passed 00:20:20.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:20.487 Test: blockdev comparev and writev ...passed 00:20:20.487 Test: blockdev nvme passthru rw ...passed 00:20:20.487 Test: blockdev nvme passthru vendor specific ...passed 00:20:20.487 Test: blockdev nvme admin passthru ...passed 00:20:20.487 Test: blockdev copy ...passed 00:20:20.487 Suite: bdevio tests on: nvme0n2 00:20:20.487 Test: blockdev write read block ...passed 00:20:20.487 Test: blockdev write zeroes read block ...passed 00:20:20.487 Test: blockdev write zeroes read no split ...passed 00:20:20.487 Test: blockdev write zeroes read split ...passed 00:20:20.487 Test: blockdev write zeroes read split partial ...passed 00:20:20.487 Test: blockdev reset ...passed 00:20:20.487 Test: blockdev write read 8 blocks ...passed 00:20:20.487 Test: blockdev write read size > 128k ...passed 00:20:20.487 Test: blockdev write read invalid size ...passed 00:20:20.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:20.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:20.487 Test: blockdev write read max offset ...passed 00:20:20.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:20.487 Test: blockdev writev readv 8 blocks ...passed 00:20:20.487 Test: blockdev writev readv 30 x 1block ...passed 00:20:20.487 Test: blockdev writev readv block ...passed 00:20:20.487 Test: blockdev writev readv size > 128k ...passed 00:20:20.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:20.487 Test: blockdev comparev and writev ...passed 00:20:20.487 Test: blockdev nvme passthru rw ...passed 00:20:20.487 Test: blockdev nvme passthru vendor specific ...passed 00:20:20.487 Test: blockdev nvme admin passthru ...passed 00:20:20.487 Test: blockdev copy ...passed 00:20:20.487 Suite: bdevio tests on: nvme0n1 00:20:20.487 Test: blockdev write read block ...passed 00:20:20.487 Test: blockdev write zeroes read block ...passed 00:20:20.487 Test: blockdev write zeroes read no split ...passed 00:20:20.487 Test: blockdev write zeroes read split ...passed 00:20:20.487 Test: blockdev write zeroes read split partial ...passed 00:20:20.487 Test: blockdev reset ...passed 00:20:20.487 Test: blockdev write read 8 blocks ...passed 00:20:20.487 Test: blockdev write read size > 128k ...passed 00:20:20.487 Test: blockdev write read invalid size ...passed 00:20:20.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:20.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:20.487 Test: blockdev write read max offset ...passed 00:20:20.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:20.487 Test: blockdev writev readv 8 blocks ...passed 00:20:20.487 Test: blockdev writev readv 30 x 1block ...passed 00:20:20.487 Test: blockdev writev readv block ...passed 00:20:20.487 Test: blockdev writev readv size > 128k ...passed 00:20:20.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:20.487 Test: blockdev comparev and writev ...passed 00:20:20.487 Test: blockdev nvme passthru rw ...passed 00:20:20.487 Test: blockdev nvme passthru vendor specific ...passed 00:20:20.487 Test: blockdev nvme admin passthru ...passed 00:20:20.487 Test: blockdev copy ...passed 
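Each of the six suites above is the same set of 23 bdevio cases run against one xnvme bdev, which is where the 138 tests in the summary below come from. Cases for capabilities these bdevs report as unsupported (unmap, flush, nvme passthru, copy) still show passed, consistent with bdevio skipping operations a bdev does not support rather than failing them. The harness is RPC-driven: bdevio is started with -w so it waits on its socket until tests.py sends perform_tests. A rough by-hand equivalent of what run_test did here, paths as in this log:

    # Start bdevio waiting on its RPC socket, then fire all cases:
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests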
00:20:20.487 00:20:20.487 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.487 suites 6 6 n/a 0 0 00:20:20.487 tests 138 138 138 0 0 00:20:20.487 asserts 780 780 780 0 n/a 00:20:20.487 00:20:20.487 Elapsed time = 1.285 seconds 00:20:20.487 0 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74054 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74054 ']' 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74054 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74054 00:20:20.487 killing process with pid 74054 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74054' 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74054 00:20:20.487 09:17:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74054 00:20:21.867 09:17:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:21.867 00:20:21.867 real 0m2.650s 00:20:21.867 user 0m6.402s 00:20:21.867 sys 0m0.494s 00:20:21.867 09:17:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.867 09:17:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:21.867 ************************************ 00:20:21.867 END TEST bdev_bounds 00:20:21.867 ************************************ 00:20:21.867 09:17:16 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:21.867 09:17:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:21.867 09:17:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.867 09:17:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:21.867 ************************************ 00:20:21.867 START TEST bdev_nbd 00:20:21.867 ************************************ 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
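bdev_nbd drives the same six bdevs through the kernel NBD layer: bdev_svc is started with its RPC server on /var/tmp/spdk-nbd.sock, each bdev is exported as a /dev/nbdX node, probed with a single 4 KiB direct-I/O read via dd, and detached again, after which the test checks that nbd_get_disks reports an empty list. One export/probe/detach cycle, sketched from the RPCs this log shows (socket and paths as in this run; the harness itself goes through the nbd_common.sh helpers rather than calling these directly):

    # Attach bdev nvme0n1 to /dev/nbd0, read one block to prove it is live, detach:
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_start_disk nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_get_disks    # prints '[]' once every device is detached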
00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74118 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74118 /var/tmp/spdk-nbd.sock 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74118 ']' 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:21.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.867 09:17:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:21.867 [2024-11-20 09:17:16.742926] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:20:21.867 [2024-11-20 09:17:16.743352] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.867 [2024-11-20 09:17:16.929513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.127 [2024-11-20 09:17:17.054468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:22.694 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.952 
1+0 records in 00:20:22.952 1+0 records out 00:20:22.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465107 s, 8.8 MB/s 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:22.952 09:17:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.211 1+0 records in 00:20:23.211 1+0 records out 00:20:23.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603373 s, 6.8 MB/s 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:23.211 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:20:23.470 09:17:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.470 1+0 records in 00:20:23.470 1+0 records out 00:20:23.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578345 s, 7.1 MB/s 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:23.470 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.729 1+0 records in 00:20:23.729 1+0 records out 00:20:23.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071763 s, 5.7 MB/s 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:23.729 09:17:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:20:23.987 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:20:23.987 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:20:23.987 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:20:23.987 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:20:23.987 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:23.987 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:23.987 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.988 1+0 records in 00:20:23.988 1+0 records out 00:20:23.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104053 s, 3.9 MB/s 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:23.988 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:20:24.246 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:20:24.246 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:20:24.246 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:20:24.246 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:20:24.246 09:17:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:24.246 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:24.246 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:24.246 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:24.247 1+0 records in 00:20:24.247 1+0 records out 00:20:24.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00297138 s, 1.4 MB/s 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:20:24.247 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd0", 00:20:24.506 "bdev_name": "nvme0n1" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd1", 00:20:24.506 "bdev_name": "nvme0n2" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd2", 00:20:24.506 "bdev_name": "nvme0n3" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd3", 00:20:24.506 "bdev_name": "nvme1n1" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd4", 00:20:24.506 "bdev_name": "nvme2n1" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd5", 00:20:24.506 "bdev_name": "nvme3n1" 00:20:24.506 } 00:20:24.506 ]' 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd0", 00:20:24.506 "bdev_name": "nvme0n1" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd1", 00:20:24.506 "bdev_name": "nvme0n2" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd2", 00:20:24.506 "bdev_name": "nvme0n3" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd3", 00:20:24.506 "bdev_name": "nvme1n1" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd4", 00:20:24.506 "bdev_name": "nvme2n1" 00:20:24.506 }, 00:20:24.506 { 00:20:24.506 "nbd_device": "/dev/nbd5", 00:20:24.506 "bdev_name": "nvme3n1" 00:20:24.506 } 00:20:24.506 ]' 00:20:24.506 09:17:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.506 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.073 09:17:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:25.073 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.331 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.332 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:20:25.332 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:25.332 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.332 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.332 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:20:25.898 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:20:25.899 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:20:25.899 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:25.899 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:25.899 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:20:25.899 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:25.899 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:25.899 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:25.899 09:17:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:26.157 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:26.415 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:20:26.674 /dev/nbd0 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.674 1+0 records in 00:20:26.674 1+0 records out 00:20:26.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516343 s, 7.9 MB/s 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:26.674 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.946 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:26.946 09:17:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:26.946 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.946 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:26.946 09:17:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:20:26.946 /dev/nbd1 00:20:26.946 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:26.946 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:26.946 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:26.946 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:26.946 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:26.946 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:26.946 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:26.946 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:26.946 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:26.947 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:26.947 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:26.947 1+0 records in 00:20:26.947 1+0 records out 00:20:26.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681791 s, 6.0 MB/s 00:20:26.947 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.947 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:26.947 09:17:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:26.947 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:26.947 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:26.947 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:26.947 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:26.947 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:20:27.265 /dev/nbd10 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:27.265 1+0 records in 00:20:27.265 1+0 records out 00:20:27.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679608 s, 6.0 MB/s 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:27.265 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:20:27.524 /dev/nbd11 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:27.524 09:17:22 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:27.524 1+0 records in 00:20:27.524 1+0 records out 00:20:27.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722965 s, 5.7 MB/s 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:27.524 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:20:27.783 /dev/nbd12 00:20:27.783 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:27.783 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:27.783 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:20:27.783 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:27.783 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:27.783 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:27.783 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.042 1+0 records in 00:20:28.042 1+0 records out 00:20:28.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880834 s, 4.7 MB/s 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:28.042 09:17:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:20:28.042 /dev/nbd13 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:28.042 1+0 records in 00:20:28.042 1+0 records out 00:20:28.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870761 s, 4.7 MB/s 00:20:28.042 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.301 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:28.301 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:28.301 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:28.301 09:17:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:28.301 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:28.301 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:20:28.301 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:28.301 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:28.301 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd0", 00:20:28.560 "bdev_name": "nvme0n1" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd1", 00:20:28.560 "bdev_name": "nvme0n2" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd10", 00:20:28.560 "bdev_name": "nvme0n3" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd11", 00:20:28.560 "bdev_name": "nvme1n1" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd12", 00:20:28.560 "bdev_name": "nvme2n1" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd13", 00:20:28.560 "bdev_name": "nvme3n1" 00:20:28.560 } 00:20:28.560 ]' 00:20:28.560 09:17:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd0", 00:20:28.560 "bdev_name": "nvme0n1" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd1", 00:20:28.560 "bdev_name": "nvme0n2" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd10", 00:20:28.560 "bdev_name": "nvme0n3" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd11", 00:20:28.560 "bdev_name": "nvme1n1" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd12", 00:20:28.560 "bdev_name": "nvme2n1" 00:20:28.560 }, 00:20:28.560 { 00:20:28.560 "nbd_device": "/dev/nbd13", 00:20:28.560 "bdev_name": "nvme3n1" 00:20:28.560 } 00:20:28.560 ]' 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:28.560 /dev/nbd1 00:20:28.560 /dev/nbd10 00:20:28.560 /dev/nbd11 00:20:28.560 /dev/nbd12 00:20:28.560 /dev/nbd13' 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:28.560 /dev/nbd1 00:20:28.560 /dev/nbd10 00:20:28.560 /dev/nbd11 00:20:28.560 /dev/nbd12 00:20:28.560 /dev/nbd13' 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:28.560 256+0 records in 00:20:28.560 256+0 records out 00:20:28.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00686542 s, 153 MB/s 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:28.560 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:28.819 256+0 records in 00:20:28.819 256+0 records out 00:20:28.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158457 s, 6.6 MB/s 00:20:28.819 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:28.819 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:28.819 256+0 records in 00:20:28.819 256+0 records out 00:20:28.819 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.176338 s, 5.9 MB/s 00:20:28.819 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:28.819 09:17:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:29.078 256+0 records in 00:20:29.078 256+0 records out 00:20:29.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169144 s, 6.2 MB/s 00:20:29.078 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:29.078 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:29.337 256+0 records in 00:20:29.337 256+0 records out 00:20:29.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173228 s, 6.1 MB/s 00:20:29.337 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:29.337 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:29.337 256+0 records in 00:20:29.337 256+0 records out 00:20:29.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189789 s, 5.5 MB/s 00:20:29.337 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:29.337 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:29.595 256+0 records in 00:20:29.595 256+0 records out 00:20:29.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169748 s, 6.2 MB/s 00:20:29.595 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:20:29.595 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:29.595 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:29.595 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:29.595 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:29.595 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:29.595 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:29.596 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:29.854 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:30.114 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:30.114 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:30.114 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.114 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.114 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:30.114 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:30.114 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:30.114 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.114 09:17:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:30.373 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:30.374 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:30.374 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.374 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.374 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:30.374 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:30.374 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:30.374 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.374 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:30.632 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:30.632 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:30.632 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:30.632 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:30.632 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:30.632 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:30.632 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:30.632 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:30.632 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:30.633 09:17:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:30.891 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.150 09:17:26 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.150 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:31.409 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:31.668 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:31.927 malloc_lvol_verify 00:20:31.927 09:17:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:32.186 40ab5048-5774-4a0c-aba5-274f44096e54 00:20:32.186 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:32.445 460c2f11-967d-4130-915f-3d5ce0ac78a6 00:20:32.445 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:32.703 /dev/nbd0 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
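The nbd_with_lvol_verify sequence traced above (malloc bdev, lvstore, lvol, NBD export, capacity wait, mkfs) can be reproduced by hand against a running spdk-nbd target. A minimal sketch, assuming the same socket path and sizes as this run (16 MiB malloc bdev with 512-byte blocks, 4 MiB lvol) and a free /dev/nbd0; the capacity loop below is a simplified stand-in for the suite's wait_for_nbd_set_capacity helper, not the helper itself:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export it over NBD

    # Wait until the kernel reports a non-zero capacity (in 512 B sectors,
    # hence the 8192 seen in the trace) before formatting.
    while [[ ! -e /sys/block/nbd0/size ]] || (( $(cat /sys/block/nbd0/size) == 0 )); do
        sleep 0.1
    done

    mkfs.ext4 /dev/nbd0                                    # produces the mke2fs output below
    $RPC nbd_stop_disk /dev/nbd0                           # tear the export back down
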
00:20:32.703 mke2fs 1.47.0 (5-Feb-2023) 00:20:32.703 Discarding device blocks: 0/4096 done 00:20:32.703 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:32.703 00:20:32.703 Allocating group tables: 0/1 done 00:20:32.703 Writing inode tables: 0/1 done 00:20:32.703 Creating journal (1024 blocks): done 00:20:32.703 Writing superblocks and filesystem accounting information: 0/1 done 00:20:32.703 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:32.703 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74118 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74118 ']' 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74118 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74118 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.962 killing process with pid 74118 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74118' 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74118 00:20:32.962 09:17:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74118 00:20:33.898 09:17:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:33.898 00:20:33.898 real 0m12.274s 00:20:33.898 user 0m17.022s 00:20:33.898 sys 0m4.165s 00:20:33.898 09:17:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.898 09:17:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:33.898 ************************************ 
00:20:33.898 END TEST bdev_nbd 00:20:33.898 ************************************ 00:20:33.898 09:17:28 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:33.898 09:17:28 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:20:33.898 09:17:28 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:20:33.898 09:17:28 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:33.898 09:17:28 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:33.898 09:17:28 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.898 09:17:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:33.898 ************************************ 00:20:33.898 START TEST bdev_fio 00:20:33.898 ************************************ 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:33.898 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:33.898 09:17:28 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:33.898 09:17:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:20:33.898 09:17:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:20:33.898 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:33.898 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:20:33.898 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:20:33.898 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:33.898 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:20:33.899 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:20:33.899 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:33.899 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:20:33.899 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:34.157 ************************************ 00:20:34.157 START TEST bdev_fio_rw_verify 00:20:34.157 ************************************ 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:34.157 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:34.158 09:17:29 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:34.416 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:34.416 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:34.416 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:34.416 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:34.416 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:34.416 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:34.416 fio-3.35 00:20:34.416 Starting 6 threads 00:20:46.619 00:20:46.619 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74542: Wed Nov 20 09:17:40 2024 00:20:46.619 read: IOPS=29.2k, BW=114MiB/s (120MB/s)(1140MiB/10001msec) 00:20:46.619 slat (usec): min=2, max=1526, avg= 9.25, stdev= 9.33 00:20:46.619 clat (usec): min=93, max=297097, avg=612.45, 
stdev=1568.42 00:20:46.619 lat (usec): min=105, max=297104, avg=621.70, stdev=1568.71 00:20:46.619 clat percentiles (usec): 00:20:46.619 | 50.000th=[ 603], 99.000th=[ 1139], 99.900th=[ 1663], 00:20:46.619 | 99.990th=[ 4015], 99.999th=[295699] 00:20:46.619 write: IOPS=29.6k, BW=115MiB/s (121MB/s)(1155MiB/10001msec); 0 zone resets 00:20:46.619 slat (usec): min=10, max=2517, avg=29.80, stdev=33.48 00:20:46.619 clat (usec): min=88, max=4847, avg=734.36, stdev=250.28 00:20:46.619 lat (usec): min=109, max=4872, avg=764.16, stdev=253.77 00:20:46.619 clat percentiles (usec): 00:20:46.619 | 50.000th=[ 734], 99.000th=[ 1401], 99.900th=[ 2147], 99.990th=[ 3523], 00:20:46.619 | 99.999th=[ 4621] 00:20:46.619 bw ( KiB/s): min=90095, max=144323, per=99.65%, avg=117806.11, stdev=2584.76, samples=114 00:20:46.619 iops : min=22523, max=36080, avg=29451.11, stdev=646.17, samples=114 00:20:46.619 lat (usec) : 100=0.01%, 250=2.99%, 500=22.73%, 750=37.47%, 1000=29.21% 00:20:46.619 lat (msec) : 2=7.50%, 4=0.09%, 10=0.01%, 500=0.01% 00:20:46.619 cpu : usr=55.24%, sys=29.60%, ctx=7627, majf=0, minf=24850 00:20:46.619 IO depths : 1=11.3%, 2=23.6%, 4=51.2%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:46.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.619 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:46.619 issued rwts: total=291891,295588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:46.619 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:46.619 00:20:46.619 Run status group 0 (all jobs): 00:20:46.619 READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=1140MiB (1196MB), run=10001-10001msec 00:20:46.619 WRITE: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1155MiB (1211MB), run=10001-10001msec 00:20:46.619 ----------------------------------------------------- 00:20:46.619 Suppressions used: 00:20:46.619 count bytes template 00:20:46.619 6 48 /usr/src/fio/parse.c 00:20:46.619 3504 336384 /usr/src/fio/iolog.c 00:20:46.619 1 8 libtcmalloc_minimal.so 00:20:46.619 1 904 libcrypto.so 00:20:46.619 ----------------------------------------------------- 00:20:46.619 00:20:46.619 00:20:46.619 real 0m12.465s 00:20:46.619 user 0m35.056s 00:20:46.619 sys 0m18.200s 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:46.619 ************************************ 00:20:46.619 END TEST bdev_fio_rw_verify 00:20:46.619 ************************************ 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "1c79e1d9-dad6-4207-b537-3d9c4845e7dd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1c79e1d9-dad6-4207-b537-3d9c4845e7dd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "f7b5ab7e-e08f-441c-811c-7baa38393ea1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f7b5ab7e-e08f-441c-811c-7baa38393ea1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "2f1815be-16ea-4312-816e-087c13555dc3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2f1815be-16ea-4312-816e-087c13555dc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "40ef16e0-bc33-4f8a-a370-7ef04e2266f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "40ef16e0-bc33-4f8a-a370-7ef04e2266f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "f5a5a62e-62cc-4857-9760-9840179c6972"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f5a5a62e-62cc-4857-9760-9840179c6972",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "86f463e2-5118-41f5-b78c-0e417d530744"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "86f463e2-5118-41f5-b78c-0e417d530744",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:46.619 /home/vagrant/spdk_repo/spdk 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
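One detail worth noting from the bdev_fio_rw_verify setup traced above: fio drives the SPDK bdevs through the external spdk_bdev ioengine plugin, and because this build is ASAN-instrumented (SPDK_RUN_ASAN=1), the harness extracts libasan's path from the plugin's ldd output and preloads it ahead of the plugin, since the ASAN runtime must be the first object loaded into the process. A minimal sketch of that launch, assuming the same checkout paths as the trace:

    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    JOBFILE=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

    # ASAN builds: the sanitizer runtime has to be loaded before the plugin,
    # so pull its path out of the plugin's dynamic dependencies.
    asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')

    LD_PRELOAD="$asan_lib $PLUGIN" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$JOBFILE" \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
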
00:20:46.619 00:20:46.619 real 0m12.649s 00:20:46.619 user 0m35.146s 00:20:46.619 sys 0m18.296s 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.619 09:17:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:46.619 ************************************ 00:20:46.619 END TEST bdev_fio 00:20:46.619 ************************************ 00:20:46.619 09:17:41 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:46.619 09:17:41 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:46.619 09:17:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:46.619 09:17:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.619 09:17:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:46.619 ************************************ 00:20:46.619 START TEST bdev_verify 00:20:46.619 ************************************ 00:20:46.619 09:17:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:46.886 [2024-11-20 09:17:41.764590] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:20:46.886 [2024-11-20 09:17:41.764807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74707 ] 00:20:46.886 [2024-11-20 09:17:41.952970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:47.164 [2024-11-20 09:17:42.091943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.164 [2024-11-20 09:17:42.091944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.732 Running I/O for 5 seconds... 
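The bdev_verify pass that starts here swaps fio for SPDK's bdevperf example app. Expanded, the traced invocation is roughly the following; -q, -o, -w, -t and -m are bdevperf's queue depth, I/O size, workload, runtime and reactor core mask, while -C is carried over verbatim from the traced command line. The 0x3 core mask gives the app two reactors, which is why every bdev appears twice (core masks 0x1 and 0x2) in the per-job table below.

    # Sketch of the traced bdev_verify run; paths assume the same checkout.
    # -q 128   : 128 outstanding I/Os per job
    # -o 4096  : 4 KiB I/Os
    # -w verify: write each block, read it back, and compare
    # -t 5     : run for 5 seconds
    # -m 0x3   : two reactor cores -> one job per bdev per core
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
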
00:20:50.040 24640.00 IOPS, 96.25 MiB/s [2024-11-20T09:17:46.096Z] 24127.50 IOPS, 94.25 MiB/s [2024-11-20T09:17:47.032Z] 23413.00 IOPS, 91.46 MiB/s [2024-11-20T09:17:47.966Z] 23433.00 IOPS, 91.54 MiB/s [2024-11-20T09:17:47.966Z] 23169.60 IOPS, 90.51 MiB/s 00:20:52.846 Latency(us) 00:20:52.846 [2024-11-20T09:17:47.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.846 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:52.846 Verification LBA range: start 0x0 length 0x80000 00:20:52.846 nvme0n1 : 5.05 1773.02 6.93 0.00 0.00 72088.99 15192.44 63867.81 00:20:52.846 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:52.846 Verification LBA range: start 0x80000 length 0x80000 00:20:52.846 nvme0n1 : 5.06 1617.73 6.32 0.00 0.00 78682.13 10724.07 68634.07 00:20:52.846 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:52.846 Verification LBA range: start 0x0 length 0x80000 00:20:52.846 nvme0n2 : 5.03 1780.95 6.96 0.00 0.00 71686.52 12153.95 76736.70 00:20:52.846 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:52.846 Verification LBA range: start 0x80000 length 0x80000 00:20:52.846 nvme0n2 : 5.06 1619.63 6.33 0.00 0.00 78472.43 7626.01 67680.81 00:20:52.846 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:52.846 Verification LBA range: start 0x0 length 0x80000 00:20:52.846 nvme0n3 : 5.06 1772.32 6.92 0.00 0.00 71982.90 17158.52 62437.93 00:20:52.846 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:52.846 Verification LBA range: start 0x80000 length 0x80000 00:20:52.846 nvme0n3 : 5.07 1616.95 6.32 0.00 0.00 78509.17 12213.53 67204.19 00:20:52.847 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:52.847 Verification LBA range: start 0x0 length 0x20000 00:20:52.847 nvme1n1 : 5.06 1771.51 6.92 0.00 0.00 71931.79 11200.70 73876.95 00:20:52.847 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:52.847 Verification LBA range: start 0x20000 length 0x20000 00:20:52.847 nvme1n1 : 5.07 1616.58 6.31 0.00 0.00 78461.70 5928.03 72923.69 00:20:52.847 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:52.847 Verification LBA range: start 0x0 length 0xbd0bd 00:20:52.847 nvme2n1 : 5.06 3123.53 12.20 0.00 0.00 40631.67 3991.74 59578.18 00:20:52.847 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:52.847 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:52.847 nvme2n1 : 5.05 2954.00 11.54 0.00 0.00 43149.00 4527.94 60531.43 00:20:52.847 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:52.847 Verification LBA range: start 0x0 length 0xa0000 00:20:52.847 nvme3n1 : 5.08 1764.76 6.89 0.00 0.00 71962.00 7387.69 67680.81 00:20:52.847 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:52.847 Verification LBA range: start 0xa0000 length 0xa0000 00:20:52.847 nvme3n1 : 5.05 1545.74 6.04 0.00 0.00 82459.14 8757.99 91988.71 00:20:52.847 [2024-11-20T09:17:47.967Z] =================================================================================================================== 00:20:52.847 [2024-11-20T09:17:47.967Z] Total : 22956.71 89.67 0.00 0.00 66541.30 3991.74 91988.71 00:20:53.780 00:20:53.780 real 0m7.035s 00:20:53.780 user 0m10.845s 00:20:53.780 sys 0m2.059s 00:20:53.780 09:17:48 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.780 09:17:48 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:53.780 ************************************ 00:20:53.780 END TEST bdev_verify 00:20:53.780 ************************************ 00:20:53.780 09:17:48 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:53.780 09:17:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:20:53.780 09:17:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.780 09:17:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:53.780 ************************************ 00:20:53.780 START TEST bdev_verify_big_io 00:20:53.780 ************************************ 00:20:53.780 09:17:48 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:53.780 [2024-11-20 09:17:48.846119] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:20:53.780 [2024-11-20 09:17:48.846292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74806 ] 00:20:54.039 [2024-11-20 09:17:49.029181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:54.039 [2024-11-20 09:17:49.147174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.039 [2024-11-20 09:17:49.147191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.604 Running I/O for 5 seconds... 
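A quick arithmetic check on the IOPS and MiB/s columns in these verify tables: MiB/s = IOPS x I/O size / 2^20. Taking the first 4 KiB sample above (24640.00 IOPS):

echo $(( 24640 * 4096 ))   # 100925440 bytes/s
# 100925440 / 1048576 = 96.25 MiB/s, matching the reported 96.25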
00:21:00.694 2472.00 IOPS, 154.50 MiB/s [2024-11-20T09:17:55.814Z] 3952.00 IOPS, 247.00 MiB/s 00:21:00.694 Latency(us) 00:21:00.694 [2024-11-20T09:17:55.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.694 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:00.694 Verification LBA range: start 0x0 length 0x8000 00:21:00.694 nvme0n1 : 5.74 133.71 8.36 0.00 0.00 937435.38 83409.45 1075267.03 00:21:00.694 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:00.694 Verification LBA range: start 0x8000 length 0x8000 00:21:00.694 nvme0n1 : 5.84 131.57 8.22 0.00 0.00 954931.67 15490.33 999006.95 00:21:00.694 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:00.694 Verification LBA range: start 0x0 length 0x8000 00:21:00.694 nvme0n2 : 5.75 122.50 7.66 0.00 0.00 1000651.78 91988.71 1143901.09 00:21:00.694 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:00.694 Verification LBA range: start 0x8000 length 0x8000 00:21:00.694 nvme0n2 : 5.85 150.53 9.41 0.00 0.00 820725.88 19899.11 1052389.00 00:21:00.694 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:00.694 Verification LBA range: start 0x0 length 0x8000 00:21:00.694 nvme0n3 : 5.81 140.45 8.78 0.00 0.00 843982.19 81979.58 1479445.41 00:21:00.694 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:00.694 Verification LBA range: start 0x8000 length 0x8000 00:21:00.694 nvme0n3 : 5.84 106.84 6.68 0.00 0.00 1121062.26 5659.93 2699606.57 00:21:00.694 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:00.694 Verification LBA range: start 0x0 length 0x2000 00:21:00.695 nvme1n1 : 5.75 118.25 7.39 0.00 0.00 975934.08 94848.47 1525201.45 00:21:00.695 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:00.695 Verification LBA range: start 0x2000 length 0x2000 00:21:00.695 nvme1n1 : 5.85 129.95 8.12 0.00 0.00 901384.26 21090.68 754974.72 00:21:00.695 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:00.695 Verification LBA range: start 0x0 length 0xbd0b 00:21:00.695 nvme2n1 : 5.82 187.02 11.69 0.00 0.00 596452.73 62437.93 770226.73 00:21:00.695 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:00.695 Verification LBA range: start 0xbd0b length 0xbd0b 00:21:00.695 nvme2n1 : 5.83 183.82 11.49 0.00 0.00 619902.65 48377.48 663462.63 00:21:00.695 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:00.695 Verification LBA range: start 0x0 length 0xa000 00:21:00.695 nvme3n1 : 5.82 195.06 12.19 0.00 0.00 562618.05 2591.65 758787.72 00:21:00.695 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:00.695 Verification LBA range: start 0xa000 length 0xa000 00:21:00.695 nvme3n1 : 5.85 150.32 9.40 0.00 0.00 738901.07 8519.68 1235413.18 00:21:00.695 [2024-11-20T09:17:55.815Z] =================================================================================================================== 00:21:00.695 [2024-11-20T09:17:55.815Z] Total : 1750.01 109.38 0.00 0.00 807763.20 2591.65 2699606.57 00:21:01.629 00:21:01.629 real 0m7.880s 00:21:01.629 user 0m14.232s 00:21:01.629 sys 0m0.608s 00:21:01.629 09:17:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.630 09:17:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:21:01.630 ************************************ 00:21:01.630 END TEST bdev_verify_big_io 00:21:01.630 ************************************ 00:21:01.630 09:17:56 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:01.630 09:17:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:01.630 09:17:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.630 09:17:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:01.630 ************************************ 00:21:01.630 START TEST bdev_write_zeroes 00:21:01.630 ************************************ 00:21:01.630 09:17:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:01.888 [2024-11-20 09:17:56.781689] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:21:01.888 [2024-11-20 09:17:56.781868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74916 ] 00:21:01.888 [2024-11-20 09:17:56.964521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.147 [2024-11-20 09:17:57.063826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.406 Running I/O for 1 seconds... 00:21:03.784 60704.00 IOPS, 237.12 MiB/s 00:21:03.784 Latency(us) 00:21:03.784 [2024-11-20T09:17:58.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.784 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:03.784 nvme0n1 : 1.03 8969.58 35.04 0.00 0.00 14255.20 8102.63 23592.96 00:21:03.784 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:03.784 nvme0n2 : 1.03 8956.16 34.98 0.00 0.00 14264.83 8400.52 21805.61 00:21:03.784 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:03.784 nvme0n3 : 1.03 8943.29 34.93 0.00 0.00 14274.21 8460.10 22401.40 00:21:03.784 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:03.784 nvme1n1 : 1.03 8930.45 34.88 0.00 0.00 14283.83 8400.52 24665.37 00:21:03.784 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:03.784 nvme2n1 : 1.03 15607.35 60.97 0.00 0.00 8148.54 4081.11 24665.37 00:21:03.784 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:03.784 nvme3n1 : 1.03 8905.72 34.79 0.00 0.00 14244.85 3961.95 27644.28 00:21:03.784 [2024-11-20T09:17:58.904Z] =================================================================================================================== 00:21:03.784 [2024-11-20T09:17:58.904Z] Total : 60312.55 235.60 0.00 0.00 12678.94 3961.95 27644.28 00:21:04.351 00:21:04.351 real 0m2.744s 00:21:04.351 user 0m1.952s 00:21:04.351 sys 0m0.598s 00:21:04.351 09:17:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.351 09:17:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:04.351 ************************************ 00:21:04.351 END TEST 
bdev_write_zeroes 00:21:04.351 ************************************ 00:21:04.351 09:17:59 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:04.351 09:17:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:04.351 09:17:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.351 09:17:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:04.610 ************************************ 00:21:04.610 START TEST bdev_json_nonenclosed 00:21:04.610 ************************************ 00:21:04.610 09:17:59 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:04.610 [2024-11-20 09:17:59.581950] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:21:04.610 [2024-11-20 09:17:59.582157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74964 ] 00:21:04.869 [2024-11-20 09:17:59.765725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.869 [2024-11-20 09:17:59.877298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.869 [2024-11-20 09:17:59.877433] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:04.869 [2024-11-20 09:17:59.877459] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:04.869 [2024-11-20 09:17:59.877471] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:05.127 00:21:05.127 real 0m0.616s 00:21:05.127 user 0m0.365s 00:21:05.127 sys 0m0.146s 00:21:05.127 09:18:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.127 09:18:00 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:05.127 ************************************ 00:21:05.127 END TEST bdev_json_nonenclosed 00:21:05.127 ************************************ 00:21:05.127 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:05.127 09:18:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:05.127 09:18:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.127 09:18:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:05.127 ************************************ 00:21:05.127 START TEST bdev_json_nonarray 00:21:05.127 ************************************ 00:21:05.128 09:18:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:05.386 [2024-11-20 09:18:00.252340] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
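bdev_json_nonenclosed above and bdev_json_nonarray (whose run continues below) deliberately feed bdevperf malformed configs and expect the *ERROR* lines shown: the config must be a single JSON object whose "subsystems" key is an array. A sketch of the accepted shape, mirroring the save_config dumps later in this log (the malloc parameters are illustrative):

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 }
        }
      ]
    }
  ]
}
EOF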
00:21:05.386 [2024-11-20 09:18:00.252519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74989 ] 00:21:05.386 [2024-11-20 09:18:00.430668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.645 [2024-11-20 09:18:00.528059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.645 [2024-11-20 09:18:00.528188] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:21:05.645 [2024-11-20 09:18:00.528218] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:05.645 [2024-11-20 09:18:00.528232] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:05.645 00:21:05.645 real 0m0.585s 00:21:05.645 user 0m0.339s 00:21:05.645 sys 0m0.142s 00:21:05.645 09:18:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.645 09:18:00 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:05.645 ************************************ 00:21:05.645 END TEST bdev_json_nonarray 00:21:05.645 ************************************ 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:21:05.904 09:18:00 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:06.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:07.040 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:07.299 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:07.299 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:07.299 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:07.299 00:21:07.299 real 0m55.474s 00:21:07.299 user 1m33.907s 00:21:07.299 sys 0m29.791s 00:21:07.299 09:18:02 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.299 ************************************ 00:21:07.299 09:18:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:07.299 END TEST blockdev_xnvme 00:21:07.299 ************************************ 00:21:07.299 09:18:02 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:07.299 09:18:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:07.299 09:18:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.299 09:18:02 -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.299 ************************************ 00:21:07.299 START TEST ublk 00:21:07.299 ************************************ 00:21:07.299 09:18:02 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:21:07.558 * Looking for test storage... 00:21:07.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:07.558 09:18:02 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:07.559 09:18:02 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.559 09:18:02 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.559 09:18:02 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.559 09:18:02 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.559 09:18:02 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.559 09:18:02 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.559 09:18:02 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.559 09:18:02 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.559 09:18:02 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.559 09:18:02 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.559 09:18:02 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.559 09:18:02 ublk -- scripts/common.sh@344 -- # case "$op" in 00:21:07.559 09:18:02 ublk -- scripts/common.sh@345 -- # : 1 00:21:07.559 09:18:02 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.559 09:18:02 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.559 09:18:02 ublk -- scripts/common.sh@365 -- # decimal 1 00:21:07.559 09:18:02 ublk -- scripts/common.sh@353 -- # local d=1 00:21:07.559 09:18:02 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.559 09:18:02 ublk -- scripts/common.sh@355 -- # echo 1 00:21:07.559 09:18:02 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.559 09:18:02 ublk -- scripts/common.sh@366 -- # decimal 2 00:21:07.559 09:18:02 ublk -- scripts/common.sh@353 -- # local d=2 00:21:07.559 09:18:02 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.559 09:18:02 ublk -- scripts/common.sh@355 -- # echo 2 00:21:07.559 09:18:02 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.559 09:18:02 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.559 09:18:02 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.559 09:18:02 ublk -- scripts/common.sh@368 -- # return 0 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:07.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.559 --rc genhtml_branch_coverage=1 00:21:07.559 --rc genhtml_function_coverage=1 00:21:07.559 --rc genhtml_legend=1 00:21:07.559 --rc geninfo_all_blocks=1 00:21:07.559 --rc geninfo_unexecuted_blocks=1 00:21:07.559 00:21:07.559 ' 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:07.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.559 --rc genhtml_branch_coverage=1 00:21:07.559 --rc genhtml_function_coverage=1 00:21:07.559 --rc genhtml_legend=1 00:21:07.559 --rc geninfo_all_blocks=1 00:21:07.559 --rc geninfo_unexecuted_blocks=1 00:21:07.559 00:21:07.559 ' 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:07.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.559 --rc genhtml_branch_coverage=1 00:21:07.559 --rc genhtml_function_coverage=1 00:21:07.559 --rc genhtml_legend=1 00:21:07.559 --rc geninfo_all_blocks=1 00:21:07.559 --rc geninfo_unexecuted_blocks=1 00:21:07.559 00:21:07.559 ' 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:07.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.559 --rc genhtml_branch_coverage=1 00:21:07.559 --rc genhtml_function_coverage=1 00:21:07.559 --rc genhtml_legend=1 00:21:07.559 --rc geninfo_all_blocks=1 00:21:07.559 --rc geninfo_unexecuted_blocks=1 00:21:07.559 00:21:07.559 ' 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:07.559 09:18:02 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:07.559 09:18:02 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:07.559 09:18:02 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:07.559 09:18:02 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:07.559 09:18:02 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:07.559 09:18:02 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:07.559 09:18:02 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:07.559 09:18:02 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:21:07.559 09:18:02 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:21:07.559 09:18:02 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.559 09:18:02 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:07.559 ************************************ 00:21:07.559 START TEST test_save_ublk_config 00:21:07.559 ************************************ 00:21:07.559 09:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:21:07.559 09:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:21:07.559 09:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75279 00:21:07.559 09:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:21:07.559 09:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:21:07.559 09:18:02 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75279 00:21:07.559 09:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75279 ']' 00:21:07.559 09:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.560 09:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.560 09:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.560 09:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.560 09:18:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:07.818 [2024-11-20 09:18:02.723315] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:21:07.818 [2024-11-20 09:18:02.723550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75279 ] 00:21:07.818 [2024-11-20 09:18:02.899211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.076 [2024-11-20 09:18:02.999256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.643 09:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.643 09:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:08.643 09:18:03 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:21:08.643 09:18:03 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:21:08.643 09:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.643 09:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:08.903 [2024-11-20 09:18:03.770681] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:08.903 [2024-11-20 09:18:03.771777] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:08.903 malloc0 00:21:08.903 [2024-11-20 09:18:03.841796] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:08.903 [2024-11-20 09:18:03.841912] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:08.903 [2024-11-20 09:18:03.841933] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:08.903 [2024-11-20 09:18:03.841943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:08.903 [2024-11-20 09:18:03.849849] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:08.903 [2024-11-20 09:18:03.849900] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:08.903 [2024-11-20 09:18:03.857683] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:08.903 [2024-11-20 09:18:03.857810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:08.903 [2024-11-20 09:18:03.874713] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:08.903 0 00:21:08.903 09:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.903 09:18:03 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:21:08.903 09:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.903 09:18:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:09.163 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.163 09:18:04 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:21:09.163 "subsystems": [ 00:21:09.163 { 00:21:09.163 "subsystem": "fsdev", 00:21:09.163 "config": [ 00:21:09.163 { 00:21:09.163 "method": "fsdev_set_opts", 00:21:09.163 "params": { 00:21:09.163 "fsdev_io_pool_size": 65535, 00:21:09.163 "fsdev_io_cache_size": 256 00:21:09.163 } 00:21:09.163 } 00:21:09.163 ] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "keyring", 00:21:09.163 "config": [] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "iobuf", 00:21:09.163 "config": [ 00:21:09.163 { 
00:21:09.163 "method": "iobuf_set_options", 00:21:09.163 "params": { 00:21:09.163 "small_pool_count": 8192, 00:21:09.163 "large_pool_count": 1024, 00:21:09.163 "small_bufsize": 8192, 00:21:09.163 "large_bufsize": 135168, 00:21:09.163 "enable_numa": false 00:21:09.163 } 00:21:09.163 } 00:21:09.163 ] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "sock", 00:21:09.163 "config": [ 00:21:09.163 { 00:21:09.163 "method": "sock_set_default_impl", 00:21:09.163 "params": { 00:21:09.163 "impl_name": "posix" 00:21:09.163 } 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "method": "sock_impl_set_options", 00:21:09.163 "params": { 00:21:09.163 "impl_name": "ssl", 00:21:09.163 "recv_buf_size": 4096, 00:21:09.163 "send_buf_size": 4096, 00:21:09.163 "enable_recv_pipe": true, 00:21:09.163 "enable_quickack": false, 00:21:09.163 "enable_placement_id": 0, 00:21:09.163 "enable_zerocopy_send_server": true, 00:21:09.163 "enable_zerocopy_send_client": false, 00:21:09.163 "zerocopy_threshold": 0, 00:21:09.163 "tls_version": 0, 00:21:09.163 "enable_ktls": false 00:21:09.163 } 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "method": "sock_impl_set_options", 00:21:09.163 "params": { 00:21:09.163 "impl_name": "posix", 00:21:09.163 "recv_buf_size": 2097152, 00:21:09.163 "send_buf_size": 2097152, 00:21:09.163 "enable_recv_pipe": true, 00:21:09.163 "enable_quickack": false, 00:21:09.163 "enable_placement_id": 0, 00:21:09.163 "enable_zerocopy_send_server": true, 00:21:09.163 "enable_zerocopy_send_client": false, 00:21:09.163 "zerocopy_threshold": 0, 00:21:09.163 "tls_version": 0, 00:21:09.163 "enable_ktls": false 00:21:09.163 } 00:21:09.163 } 00:21:09.163 ] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "vmd", 00:21:09.163 "config": [] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "accel", 00:21:09.163 "config": [ 00:21:09.163 { 00:21:09.163 "method": "accel_set_options", 00:21:09.163 "params": { 00:21:09.163 "small_cache_size": 128, 00:21:09.163 "large_cache_size": 16, 00:21:09.163 "task_count": 2048, 00:21:09.163 "sequence_count": 2048, 00:21:09.163 "buf_count": 2048 00:21:09.163 } 00:21:09.163 } 00:21:09.163 ] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "bdev", 00:21:09.163 "config": [ 00:21:09.163 { 00:21:09.163 "method": "bdev_set_options", 00:21:09.163 "params": { 00:21:09.163 "bdev_io_pool_size": 65535, 00:21:09.163 "bdev_io_cache_size": 256, 00:21:09.163 "bdev_auto_examine": true, 00:21:09.163 "iobuf_small_cache_size": 128, 00:21:09.163 "iobuf_large_cache_size": 16 00:21:09.163 } 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "method": "bdev_raid_set_options", 00:21:09.163 "params": { 00:21:09.163 "process_window_size_kb": 1024, 00:21:09.163 "process_max_bandwidth_mb_sec": 0 00:21:09.163 } 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "method": "bdev_iscsi_set_options", 00:21:09.163 "params": { 00:21:09.163 "timeout_sec": 30 00:21:09.163 } 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "method": "bdev_nvme_set_options", 00:21:09.163 "params": { 00:21:09.163 "action_on_timeout": "none", 00:21:09.163 "timeout_us": 0, 00:21:09.163 "timeout_admin_us": 0, 00:21:09.163 "keep_alive_timeout_ms": 10000, 00:21:09.163 "arbitration_burst": 0, 00:21:09.163 "low_priority_weight": 0, 00:21:09.163 "medium_priority_weight": 0, 00:21:09.163 "high_priority_weight": 0, 00:21:09.163 "nvme_adminq_poll_period_us": 10000, 00:21:09.163 "nvme_ioq_poll_period_us": 0, 00:21:09.163 "io_queue_requests": 0, 00:21:09.163 "delay_cmd_submit": true, 00:21:09.163 "transport_retry_count": 4, 00:21:09.163 
"bdev_retry_count": 3, 00:21:09.163 "transport_ack_timeout": 0, 00:21:09.163 "ctrlr_loss_timeout_sec": 0, 00:21:09.163 "reconnect_delay_sec": 0, 00:21:09.163 "fast_io_fail_timeout_sec": 0, 00:21:09.163 "disable_auto_failback": false, 00:21:09.163 "generate_uuids": false, 00:21:09.163 "transport_tos": 0, 00:21:09.163 "nvme_error_stat": false, 00:21:09.163 "rdma_srq_size": 0, 00:21:09.163 "io_path_stat": false, 00:21:09.163 "allow_accel_sequence": false, 00:21:09.163 "rdma_max_cq_size": 0, 00:21:09.163 "rdma_cm_event_timeout_ms": 0, 00:21:09.163 "dhchap_digests": [ 00:21:09.163 "sha256", 00:21:09.163 "sha384", 00:21:09.163 "sha512" 00:21:09.163 ], 00:21:09.163 "dhchap_dhgroups": [ 00:21:09.163 "null", 00:21:09.163 "ffdhe2048", 00:21:09.163 "ffdhe3072", 00:21:09.163 "ffdhe4096", 00:21:09.163 "ffdhe6144", 00:21:09.163 "ffdhe8192" 00:21:09.163 ] 00:21:09.163 } 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "method": "bdev_nvme_set_hotplug", 00:21:09.163 "params": { 00:21:09.163 "period_us": 100000, 00:21:09.163 "enable": false 00:21:09.163 } 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "method": "bdev_malloc_create", 00:21:09.163 "params": { 00:21:09.163 "name": "malloc0", 00:21:09.163 "num_blocks": 8192, 00:21:09.163 "block_size": 4096, 00:21:09.163 "physical_block_size": 4096, 00:21:09.163 "uuid": "66836b77-7b1e-434e-a0a3-eb1918a27813", 00:21:09.163 "optimal_io_boundary": 0, 00:21:09.163 "md_size": 0, 00:21:09.163 "dif_type": 0, 00:21:09.163 "dif_is_head_of_md": false, 00:21:09.163 "dif_pi_format": 0 00:21:09.163 } 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "method": "bdev_wait_for_examine" 00:21:09.163 } 00:21:09.163 ] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "scsi", 00:21:09.163 "config": null 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "scheduler", 00:21:09.163 "config": [ 00:21:09.163 { 00:21:09.163 "method": "framework_set_scheduler", 00:21:09.163 "params": { 00:21:09.163 "name": "static" 00:21:09.163 } 00:21:09.163 } 00:21:09.163 ] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "vhost_scsi", 00:21:09.163 "config": [] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "vhost_blk", 00:21:09.163 "config": [] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "ublk", 00:21:09.163 "config": [ 00:21:09.163 { 00:21:09.163 "method": "ublk_create_target", 00:21:09.163 "params": { 00:21:09.163 "cpumask": "1" 00:21:09.163 } 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "method": "ublk_start_disk", 00:21:09.163 "params": { 00:21:09.163 "bdev_name": "malloc0", 00:21:09.163 "ublk_id": 0, 00:21:09.163 "num_queues": 1, 00:21:09.163 "queue_depth": 128 00:21:09.163 } 00:21:09.163 } 00:21:09.163 ] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "nbd", 00:21:09.163 "config": [] 00:21:09.163 }, 00:21:09.163 { 00:21:09.163 "subsystem": "nvmf", 00:21:09.163 "config": [ 00:21:09.163 { 00:21:09.163 "method": "nvmf_set_config", 00:21:09.163 "params": { 00:21:09.163 "discovery_filter": "match_any", 00:21:09.163 "admin_cmd_passthru": { 00:21:09.163 "identify_ctrlr": false 00:21:09.163 }, 00:21:09.163 "dhchap_digests": [ 00:21:09.163 "sha256", 00:21:09.163 "sha384", 00:21:09.163 "sha512" 00:21:09.163 ], 00:21:09.163 "dhchap_dhgroups": [ 00:21:09.163 "null", 00:21:09.163 "ffdhe2048", 00:21:09.163 "ffdhe3072", 00:21:09.163 "ffdhe4096", 00:21:09.163 "ffdhe6144", 00:21:09.164 "ffdhe8192" 00:21:09.164 ] 00:21:09.164 } 00:21:09.164 }, 00:21:09.164 { 00:21:09.164 "method": "nvmf_set_max_subsystems", 00:21:09.164 "params": { 00:21:09.164 "max_subsystems": 1024 
00:21:09.164 } 00:21:09.164 }, 00:21:09.164 { 00:21:09.164 "method": "nvmf_set_crdt", 00:21:09.164 "params": { 00:21:09.164 "crdt1": 0, 00:21:09.164 "crdt2": 0, 00:21:09.164 "crdt3": 0 00:21:09.164 } 00:21:09.164 } 00:21:09.164 ] 00:21:09.164 }, 00:21:09.164 { 00:21:09.164 "subsystem": "iscsi", 00:21:09.164 "config": [ 00:21:09.164 { 00:21:09.164 "method": "iscsi_set_options", 00:21:09.164 "params": { 00:21:09.164 "node_base": "iqn.2016-06.io.spdk", 00:21:09.164 "max_sessions": 128, 00:21:09.164 "max_connections_per_session": 2, 00:21:09.164 "max_queue_depth": 64, 00:21:09.164 "default_time2wait": 2, 00:21:09.164 "default_time2retain": 20, 00:21:09.164 "first_burst_length": 8192, 00:21:09.164 "immediate_data": true, 00:21:09.164 "allow_duplicated_isid": false, 00:21:09.164 "error_recovery_level": 0, 00:21:09.164 "nop_timeout": 60, 00:21:09.164 "nop_in_interval": 30, 00:21:09.164 "disable_chap": false, 00:21:09.164 "require_chap": false, 00:21:09.164 "mutual_chap": false, 00:21:09.164 "chap_group": 0, 00:21:09.164 "max_large_datain_per_connection": 64, 00:21:09.164 "max_r2t_per_connection": 4, 00:21:09.164 "pdu_pool_size": 36864, 00:21:09.164 "immediate_data_pool_size": 16384, 00:21:09.164 "data_out_pool_size": 2048 00:21:09.164 } 00:21:09.164 } 00:21:09.164 ] 00:21:09.164 } 00:21:09.164 ] 00:21:09.164 }' 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75279 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75279 ']' 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75279 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75279 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.164 killing process with pid 75279 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75279' 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75279 00:21:09.164 09:18:04 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75279 00:21:10.542 [2024-11-20 09:18:05.368033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:10.542 [2024-11-20 09:18:05.403690] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:10.542 [2024-11-20 09:18:05.403876] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:10.542 [2024-11-20 09:18:05.412691] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:10.542 [2024-11-20 09:18:05.412749] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:10.542 [2024-11-20 09:18:05.412797] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:10.542 [2024-11-20 09:18:05.412833] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:10.542 [2024-11-20 09:18:05.413027] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75335 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75335 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75335 ']' 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:21:11.922 09:18:06 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:21:11.922 "subsystems": [ 00:21:11.922 { 00:21:11.922 "subsystem": "fsdev", 00:21:11.922 "config": [ 00:21:11.922 { 00:21:11.922 "method": "fsdev_set_opts", 00:21:11.922 "params": { 00:21:11.922 "fsdev_io_pool_size": 65535, 00:21:11.922 "fsdev_io_cache_size": 256 00:21:11.922 } 00:21:11.922 } 00:21:11.922 ] 00:21:11.922 }, 00:21:11.922 { 00:21:11.922 "subsystem": "keyring", 00:21:11.922 "config": [] 00:21:11.922 }, 00:21:11.922 { 00:21:11.922 "subsystem": "iobuf", 00:21:11.922 "config": [ 00:21:11.922 { 00:21:11.923 "method": "iobuf_set_options", 00:21:11.923 "params": { 00:21:11.923 "small_pool_count": 8192, 00:21:11.923 "large_pool_count": 1024, 00:21:11.923 "small_bufsize": 8192, 00:21:11.923 "large_bufsize": 135168, 00:21:11.923 "enable_numa": false 00:21:11.923 } 00:21:11.923 } 00:21:11.923 ] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "sock", 00:21:11.923 "config": [ 00:21:11.923 { 00:21:11.923 "method": "sock_set_default_impl", 00:21:11.923 "params": { 00:21:11.923 "impl_name": "posix" 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "sock_impl_set_options", 00:21:11.923 "params": { 00:21:11.923 "impl_name": "ssl", 00:21:11.923 "recv_buf_size": 4096, 00:21:11.923 "send_buf_size": 4096, 00:21:11.923 "enable_recv_pipe": true, 00:21:11.923 "enable_quickack": false, 00:21:11.923 "enable_placement_id": 0, 00:21:11.923 "enable_zerocopy_send_server": true, 00:21:11.923 "enable_zerocopy_send_client": false, 00:21:11.923 "zerocopy_threshold": 0, 00:21:11.923 "tls_version": 0, 00:21:11.923 "enable_ktls": false 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "sock_impl_set_options", 00:21:11.923 "params": { 00:21:11.923 "impl_name": "posix", 00:21:11.923 "recv_buf_size": 2097152, 00:21:11.923 "send_buf_size": 2097152, 00:21:11.923 "enable_recv_pipe": true, 00:21:11.923 "enable_quickack": false, 00:21:11.923 "enable_placement_id": 0, 00:21:11.923 "enable_zerocopy_send_server": true, 00:21:11.923 "enable_zerocopy_send_client": false, 00:21:11.923 "zerocopy_threshold": 0, 00:21:11.923 "tls_version": 0, 00:21:11.923 "enable_ktls": false 00:21:11.923 } 00:21:11.923 } 00:21:11.923 ] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "vmd", 00:21:11.923 "config": [] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "accel", 00:21:11.923 "config": [ 00:21:11.923 { 00:21:11.923 "method": "accel_set_options", 00:21:11.923 "params": { 00:21:11.923 "small_cache_size": 128, 
00:21:11.923 "large_cache_size": 16, 00:21:11.923 "task_count": 2048, 00:21:11.923 "sequence_count": 2048, 00:21:11.923 "buf_count": 2048 00:21:11.923 } 00:21:11.923 } 00:21:11.923 ] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "bdev", 00:21:11.923 "config": [ 00:21:11.923 { 00:21:11.923 "method": "bdev_set_options", 00:21:11.923 "params": { 00:21:11.923 "bdev_io_pool_size": 65535, 00:21:11.923 "bdev_io_cache_size": 256, 00:21:11.923 "bdev_auto_examine": true, 00:21:11.923 "iobuf_small_cache_size": 128, 00:21:11.923 "iobuf_large_cache_size": 16 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "bdev_raid_set_options", 00:21:11.923 "params": { 00:21:11.923 "process_window_size_kb": 1024, 00:21:11.923 "process_max_bandwidth_mb_sec": 0 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "bdev_iscsi_set_options", 00:21:11.923 "params": { 00:21:11.923 "timeout_sec": 30 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "bdev_nvme_set_options", 00:21:11.923 "params": { 00:21:11.923 "action_on_timeout": "none", 00:21:11.923 "timeout_us": 0, 00:21:11.923 "timeout_admin_us": 0, 00:21:11.923 "keep_alive_timeout_ms": 10000, 00:21:11.923 "arbitration_burst": 0, 00:21:11.923 "low_priority_weight": 0, 00:21:11.923 "medium_priority_weight": 0, 00:21:11.923 "high_priority_weight": 0, 00:21:11.923 "nvme_adminq_poll_period_us": 10000, 00:21:11.923 "nvme_ioq_poll_period_us": 0, 00:21:11.923 "io_queue_requests": 0, 00:21:11.923 "delay_cmd_submit": true, 00:21:11.923 "transport_retry_count": 4, 00:21:11.923 "bdev_retry_count": 3, 00:21:11.923 "transport_ack_timeout": 0, 00:21:11.923 "ctrlr_loss_timeout_sec": 0, 00:21:11.923 "reconnect_delay_sec": 0, 00:21:11.923 "fast_io_fail_timeout_sec": 0, 00:21:11.923 "disable_auto_failback": false, 00:21:11.923 "generate_uuids": false, 00:21:11.923 "transport_tos": 0, 00:21:11.923 "nvme_error_stat": false, 00:21:11.923 "rdma_srq_size": 0, 00:21:11.923 "io_path_stat": false, 00:21:11.923 "allow_accel_sequence": false, 00:21:11.923 "rdma_max_cq_size": 0, 00:21:11.923 "rdma_cm_event_timeout_ms": 0, 00:21:11.923 "dhchap_digests": [ 00:21:11.923 "sha256", 00:21:11.923 "sha384", 00:21:11.923 "sha512" 00:21:11.923 ], 00:21:11.923 "dhchap_dhgroups": [ 00:21:11.923 "null", 00:21:11.923 "ffdhe2048", 00:21:11.923 "ffdhe3072", 00:21:11.923 "ffdhe4096", 00:21:11.923 "ffdhe6144", 00:21:11.923 "ffdhe8192" 00:21:11.923 ] 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "bdev_nvme_set_hotplug", 00:21:11.923 "params": { 00:21:11.923 "period_us": 100000, 00:21:11.923 "enable": false 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "bdev_malloc_create", 00:21:11.923 "params": { 00:21:11.923 "name": "malloc0", 00:21:11.923 "num_blocks": 8192, 00:21:11.923 "block_size": 4096, 00:21:11.923 "physical_block_size": 4096, 00:21:11.923 "uuid": "66836b77-7b1e-434e-a0a3-eb1918a27813", 00:21:11.923 "optimal_io_boundary": 0, 00:21:11.923 "md_size": 0, 00:21:11.923 "dif_type": 0, 00:21:11.923 "dif_is_head_of_md": false, 00:21:11.923 "dif_pi_format": 0 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "bdev_wait_for_examine" 00:21:11.923 } 00:21:11.923 ] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "scsi", 00:21:11.923 "config": null 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "scheduler", 00:21:11.923 "config": [ 00:21:11.923 { 00:21:11.923 "method": "framework_set_scheduler", 00:21:11.923 "params": { 00:21:11.923 "name": "static" 00:21:11.923 } 
00:21:11.923 } 00:21:11.923 ] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "vhost_scsi", 00:21:11.923 "config": [] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "vhost_blk", 00:21:11.923 "config": [] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "ublk", 00:21:11.923 "config": [ 00:21:11.923 { 00:21:11.923 "method": "ublk_create_target", 00:21:11.923 "params": { 00:21:11.923 "cpumask": "1" 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "ublk_start_disk", 00:21:11.923 "params": { 00:21:11.923 "bdev_name": "malloc0", 00:21:11.923 "ublk_id": 0, 00:21:11.923 "num_queues": 1, 00:21:11.923 "queue_depth": 128 00:21:11.923 } 00:21:11.923 } 00:21:11.923 ] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "nbd", 00:21:11.923 "config": [] 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "subsystem": "nvmf", 00:21:11.923 "config": [ 00:21:11.923 { 00:21:11.923 "method": "nvmf_set_config", 00:21:11.923 "params": { 00:21:11.923 "discovery_filter": "match_any", 00:21:11.923 "admin_cmd_passthru": { 00:21:11.923 "identify_ctrlr": false 00:21:11.923 }, 00:21:11.923 "dhchap_digests": [ 00:21:11.923 "sha256", 00:21:11.923 "sha384", 00:21:11.923 "sha512" 00:21:11.923 ], 00:21:11.923 "dhchap_dhgroups": [ 00:21:11.923 "null", 00:21:11.923 "ffdhe2048", 00:21:11.923 "ffdhe3072", 00:21:11.923 "ffdhe4096", 00:21:11.923 "ffdhe6144", 00:21:11.923 "ffdhe8192" 00:21:11.923 ] 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "nvmf_set_max_subsystems", 00:21:11.923 "params": { 00:21:11.923 "max_subsystems": 1024 00:21:11.923 } 00:21:11.923 }, 00:21:11.923 { 00:21:11.923 "method": "nvmf_set_crdt", 00:21:11.924 "params": { 00:21:11.924 "crdt1": 0, 00:21:11.924 "crdt2": 0, 00:21:11.924 "crdt3": 0 00:21:11.924 } 00:21:11.924 } 00:21:11.924 ] 00:21:11.924 }, 00:21:11.924 { 00:21:11.924 "subsystem": "iscsi", 00:21:11.924 "config": [ 00:21:11.924 { 00:21:11.924 "method": "iscsi_set_options", 00:21:11.924 "params": { 00:21:11.924 "node_base": "iqn.2016-06.io.spdk", 00:21:11.924 "max_sessions": 128, 00:21:11.924 "max_connections_per_session": 2, 00:21:11.924 "max_queue_depth": 64, 00:21:11.924 "default_time2wait": 2, 00:21:11.924 "default_time2retain": 20, 00:21:11.924 "first_burst_length": 8192, 00:21:11.924 "immediate_data": true, 00:21:11.924 "allow_duplicated_isid": false, 00:21:11.924 "error_recovery_level": 0, 00:21:11.924 "nop_timeout": 60, 00:21:11.924 "nop_in_interval": 30, 00:21:11.924 "disable_chap": false, 00:21:11.924 "require_chap": false, 00:21:11.924 "mutual_chap": false, 00:21:11.924 "chap_group": 0, 00:21:11.924 "max_large_datain_per_connection": 64, 00:21:11.924 "max_r2t_per_connection": 4, 00:21:11.924 "pdu_pool_size": 36864, 00:21:11.924 "immediate_data_pool_size": 16384, 00:21:11.924 "data_out_pool_size": 2048 00:21:11.924 } 00:21:11.924 } 00:21:11.924 ] 00:21:11.924 } 00:21:11.924 ] 00:21:11.924 }' 00:21:12.183 [2024-11-20 09:18:07.089933] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
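The '-c /dev/fd/63' in the spdk_tgt command above is bash process substitution: the JSON echoed by the test script is handed to the target as a config file without touching disk. Equivalent sketch ($saved_config is an illustrative variable holding the dump):

./build/bin/spdk_tgt -L ublk -c <(echo "$saved_config")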
00:21:12.183 [2024-11-20 09:18:07.090143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75335 ] 00:21:12.183 [2024-11-20 09:18:07.260767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.442 [2024-11-20 09:18:07.367108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.382 [2024-11-20 09:18:08.492699] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:13.382 [2024-11-20 09:18:08.493889] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:13.382 [2024-11-20 09:18:08.498988] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:21:13.382 [2024-11-20 09:18:08.499089] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:21:13.382 [2024-11-20 09:18:08.499108] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:13.382 [2024-11-20 09:18:08.499118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:13.642 [2024-11-20 09:18:08.506940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:13.643 [2024-11-20 09:18:08.506964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:13.643 [2024-11-20 09:18:08.512932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:13.643 [2024-11-20 09:18:08.513037] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:13.643 [2024-11-20 09:18:08.529888] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75335 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75335 ']' 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75335 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75335 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75335' 00:21:13.643 killing process with pid 75335 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75335 00:21:13.643 09:18:08 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75335 00:21:15.558 [2024-11-20 09:18:10.196183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:15.558 [2024-11-20 09:18:10.226755] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:15.558 [2024-11-20 09:18:10.226927] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:15.558 [2024-11-20 09:18:10.235812] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:15.558 [2024-11-20 09:18:10.235917] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:15.558 [2024-11-20 09:18:10.235929] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:15.558 [2024-11-20 09:18:10.235978] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:15.558 [2024-11-20 09:18:10.236169] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:17.029 09:18:12 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:21:17.029 00:21:17.029 real 0m9.435s 00:21:17.029 user 0m7.020s 00:21:17.029 sys 0m3.293s 00:21:17.029 09:18:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.029 09:18:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:21:17.029 ************************************ 00:21:17.029 END TEST test_save_ublk_config 00:21:17.029 ************************************ 00:21:17.029 09:18:12 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75425 00:21:17.029 09:18:12 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.029 09:18:12 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75425 00:21:17.029 09:18:12 ublk -- common/autotest_common.sh@835 -- # '[' -z 75425 ']' 00:21:17.029 09:18:12 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.029 09:18:12 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.029 09:18:12 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.029 09:18:12 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.029 09:18:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:17.029 09:18:12 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:17.288 [2024-11-20 09:18:12.199426] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
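Recap of the save/restore round trip that test_save_ublk_config just completed, before the main ublk suite output continues. A sketch in plain shell, with the disk size taken from the dumps above (8192 x 4 KiB blocks = 32 MiB) and $tgtpid standing in for the first target's pid:

./scripts/rpc.py ublk_create_target
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0      # bdev behind the ublk disk
./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128      # -> /dev/ublkb0
./scripts/rpc.py save_config > /tmp/ublk.json               # the JSON dumped above
kill "$tgtpid"
./build/bin/spdk_tgt -L ublk -c /tmp/ublk.json &            # second target, restarted from the dump
./scripts/rpc.py ublk_get_disks                             # must report /dev/ublkb0 again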
00:21:17.288 [2024-11-20 09:18:12.199614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75425 ] 00:21:17.288 [2024-11-20 09:18:12.388868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:17.546 [2024-11-20 09:18:12.549916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.546 [2024-11-20 09:18:12.549921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.484 09:18:13 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.484 09:18:13 ublk -- common/autotest_common.sh@868 -- # return 0 00:21:18.484 09:18:13 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:21:18.484 09:18:13 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:18.484 09:18:13 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.484 09:18:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:18.484 ************************************ 00:21:18.484 START TEST test_create_ublk 00:21:18.484 ************************************ 00:21:18.484 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:21:18.484 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:21:18.484 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.484 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:18.484 [2024-11-20 09:18:13.472688] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:18.484 [2024-11-20 09:18:13.475694] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:18.484 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.484 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:21:18.484 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:21:18.484 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.484 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:18.743 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.743 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:21:18.743 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:18.743 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.743 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:18.743 [2024-11-20 09:18:13.780934] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:21:18.743 [2024-11-20 09:18:13.781530] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:18.743 [2024-11-20 09:18:13.781555] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:18.743 [2024-11-20 09:18:13.781567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:18.743 [2024-11-20 09:18:13.789198] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:18.743 [2024-11-20 09:18:13.789237] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:18.743 
[2024-11-20 09:18:13.795709] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:18.743 [2024-11-20 09:18:13.808869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:18.744 [2024-11-20 09:18:13.829817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:18.744 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.744 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:21:18.744 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:21:18.744 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:21:18.744 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.744 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:18.744 09:18:13 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.744 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:21:18.744 { 00:21:18.744 "ublk_device": "/dev/ublkb0", 00:21:18.744 "id": 0, 00:21:18.744 "queue_depth": 512, 00:21:18.744 "num_queues": 4, 00:21:18.744 "bdev_name": "Malloc0" 00:21:18.744 } 00:21:18.744 ]' 00:21:18.744 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:21:19.003 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:19.003 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:21:19.003 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:21:19.003 09:18:13 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:21:19.003 09:18:14 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:21:19.003 09:18:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:21:19.003 09:18:14 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:21:19.003 09:18:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:21:19.003 09:18:14 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:19.003 09:18:14 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
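(The fio template assembled just below is executed as-is; restated as a standalone sketch, the write-then-verify job against the /dev/ublkb0 device created in this test is:)

fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
# --time_based with --runtime=10 keeps writing for 10 seconds; verification is
# requested, but as fio warns in the output that follows, the verify read phase
# never starts because the write phase consumes the entire runtime.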
00:21:19.003 09:18:14 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:21:19.262 fio: verification read phase will never start because write phase uses all of runtime 00:21:19.262 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:21:19.262 fio-3.35 00:21:19.262 Starting 1 process 00:21:29.243 00:21:29.243 fio_test: (groupid=0, jobs=1): err= 0: pid=75477: Wed Nov 20 09:18:24 2024 00:21:29.243 write: IOPS=7075, BW=27.6MiB/s (29.0MB/s)(276MiB/10001msec); 0 zone resets 00:21:29.243 clat (usec): min=72, max=7998, avg=140.04, stdev=182.13 00:21:29.243 lat (usec): min=72, max=8001, avg=140.75, stdev=182.16 00:21:29.243 clat percentiles (usec): 00:21:29.243 | 1.00th=[ 97], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 113], 00:21:29.243 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 125], 60.00th=[ 130], 00:21:29.243 | 70.00th=[ 137], 80.00th=[ 145], 90.00th=[ 159], 95.00th=[ 172], 00:21:29.243 | 99.00th=[ 206], 99.50th=[ 249], 99.90th=[ 3458], 99.95th=[ 3916], 00:21:29.243 | 99.99th=[ 4228] 00:21:29.243 bw ( KiB/s): min=13117, max=29680, per=100.00%, avg=28317.74, stdev=3693.41, samples=19 00:21:29.243 iops : min= 3279, max= 7420, avg=7079.42, stdev=923.41, samples=19 00:21:29.243 lat (usec) : 100=1.70%, 250=97.81%, 500=0.05%, 750=0.02%, 1000=0.02% 00:21:29.243 lat (msec) : 2=0.11%, 4=0.26%, 10=0.03% 00:21:29.243 cpu : usr=1.73%, sys=4.72%, ctx=70768, majf=0, minf=797 00:21:29.243 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:29.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.243 issued rwts: total=0,70767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:29.243 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:29.243 00:21:29.243 Run status group 0 (all jobs): 00:21:29.243 WRITE: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=276MiB (290MB), run=10001-10001msec 00:21:29.243 00:21:29.243 Disk stats (read/write): 00:21:29.243 ublkb0: ios=0/70008, merge=0/0, ticks=0/9279, in_queue=9280, util=99.09% 00:21:29.243 09:18:24 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:21:29.243 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.243 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:29.503 [2024-11-20 09:18:24.364818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:21:29.503 [2024-11-20 09:18:24.399749] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:29.503 [2024-11-20 09:18:24.400811] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:29.503 [2024-11-20 09:18:24.406740] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:29.503 [2024-11-20 09:18:24.407177] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:29.503 [2024-11-20 09:18:24.407205] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.503 09:18:24 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:21:29.503 09:18:24 
ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:29.503 [2024-11-20 09:18:24.417979] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:21:29.503 request: 00:21:29.503 { 00:21:29.503 "ublk_id": 0, 00:21:29.503 "method": "ublk_stop_disk", 00:21:29.503 "req_id": 1 00:21:29.503 } 00:21:29.503 Got JSON-RPC error response 00:21:29.503 response: 00:21:29.503 { 00:21:29.503 "code": -19, 00:21:29.503 "message": "No such device" 00:21:29.503 } 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.503 09:18:24 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:29.503 [2024-11-20 09:18:24.436832] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:29.503 [2024-11-20 09:18:24.444752] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:29.503 [2024-11-20 09:18:24.444800] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.503 09:18:24 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.503 09:18:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:30.071 09:18:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.071 09:18:25 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:21:30.071 09:18:25 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:21:30.071 09:18:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.071 09:18:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:30.071 09:18:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.071 09:18:25 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:30.071 09:18:25 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:21:30.330 09:18:25 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:21:30.330 09:18:25 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:30.330 09:18:25 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.330 09:18:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:30.330 09:18:25 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.330 09:18:25 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:30.330 09:18:25 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:21:30.330 09:18:25 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:30.330 00:21:30.330 real 0m11.811s 00:21:30.330 user 0m0.634s 00:21:30.330 sys 0m0.576s 00:21:30.330 09:18:25 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.330 09:18:25 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:30.330 ************************************ 00:21:30.330 END TEST test_create_ublk 00:21:30.330 ************************************ 00:21:30.330 09:18:25 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:21:30.330 09:18:25 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:30.330 09:18:25 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.330 09:18:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:30.330 ************************************ 00:21:30.330 START TEST test_create_multi_ublk 00:21:30.330 ************************************ 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:30.330 [2024-11-20 09:18:25.338760] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:30.330 [2024-11-20 09:18:25.341683] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.330 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:30.589 [2024-11-20 09:18:25.653965] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:21:30.589 [2024-11-20 
09:18:25.654668] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:21:30.589 [2024-11-20 09:18:25.654689] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:21:30.589 [2024-11-20 09:18:25.654707] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:21:30.589 [2024-11-20 09:18:25.662304] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:30.589 [2024-11-20 09:18:25.662348] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:30.589 [2024-11-20 09:18:25.668827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:30.589 [2024-11-20 09:18:25.669647] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:21:30.589 [2024-11-20 09:18:25.684972] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.589 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:31.158 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.158 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:21:31.158 09:18:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:21:31.158 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.158 09:18:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:31.158 [2024-11-20 09:18:25.999945] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:21:31.158 [2024-11-20 09:18:26.000593] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:21:31.158 [2024-11-20 09:18:26.000632] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:31.158 [2024-11-20 09:18:26.000643] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:31.158 [2024-11-20 09:18:26.007764] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:31.158 [2024-11-20 09:18:26.007788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:31.158 [2024-11-20 09:18:26.014798] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:31.158 [2024-11-20 09:18:26.015691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:31.158 [2024-11-20 09:18:26.037837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:31.158 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.158 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:21:31.158 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:31.158 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:21:31.158 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.158 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:31.417 [2024-11-20 09:18:26.352925] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:21:31.417 [2024-11-20 09:18:26.353538] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:21:31.417 [2024-11-20 09:18:26.353577] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:21:31.417 [2024-11-20 09:18:26.353590] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:21:31.417 [2024-11-20 09:18:26.362285] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:31.417 [2024-11-20 09:18:26.362318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:31.417 [2024-11-20 09:18:26.368843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:31.417 [2024-11-20 09:18:26.369672] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:21:31.417 [2024-11-20 09:18:26.376315] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.417 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:31.676 [2024-11-20 09:18:26.685030] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:21:31.676 [2024-11-20 09:18:26.685559] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:21:31.676 [2024-11-20 09:18:26.685578] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:21:31.676 [2024-11-20 09:18:26.685587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:21:31.676 [2024-11-20 09:18:26.696733] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:31.676 [2024-11-20 09:18:26.696757] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:31.676 [2024-11-20 09:18:26.704760] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:31.676 [2024-11-20 09:18:26.705558] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:21:31.676 [2024-11-20 09:18:26.711907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.676 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:31.677 09:18:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.677 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:21:31.677 { 00:21:31.677 "ublk_device": "/dev/ublkb0", 00:21:31.677 "id": 0, 00:21:31.677 "queue_depth": 512, 00:21:31.677 "num_queues": 4, 00:21:31.677 "bdev_name": "Malloc0" 00:21:31.677 }, 00:21:31.677 { 00:21:31.677 "ublk_device": "/dev/ublkb1", 00:21:31.677 "id": 1, 00:21:31.677 "queue_depth": 512, 00:21:31.677 "num_queues": 4, 00:21:31.677 "bdev_name": "Malloc1" 00:21:31.677 }, 00:21:31.677 { 00:21:31.677 "ublk_device": "/dev/ublkb2", 00:21:31.677 "id": 2, 00:21:31.677 "queue_depth": 512, 00:21:31.677 "num_queues": 4, 00:21:31.677 "bdev_name": "Malloc2" 00:21:31.677 }, 00:21:31.677 { 00:21:31.677 "ublk_device": "/dev/ublkb3", 00:21:31.677 "id": 3, 00:21:31.677 "queue_depth": 512, 00:21:31.677 "num_queues": 4, 00:21:31.677 "bdev_name": "Malloc3" 00:21:31.677 } 00:21:31.677 ]' 00:21:31.677 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:21:31.677 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:31.677 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:21:31.935 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:21:31.935 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:21:31.935 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:21:31.935 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:21:31.935 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:31.935 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:21:31.935 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:31.935 09:18:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:21:31.935 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:21:31.935 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:31.935 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:21:32.195 09:18:27 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:32.195 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:21:32.454 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:21:32.454 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:21:32.454 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:21:32.454 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:21:32.454 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:32.454 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:21:32.454 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:32.454 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:21:32.455 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:21:32.455 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:32.455 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.714 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:32.714 [2024-11-20 09:18:27.815284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:21:32.973 [2024-11-20 09:18:27.847570] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:32.973 [2024-11-20 09:18:27.849091] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:21:32.973 [2024-11-20 09:18:27.854774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:32.973 [2024-11-20 09:18:27.855076] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:21:32.973 [2024-11-20 09:18:27.855093] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:21:32.973 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.973 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:32.973 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:21:32.973 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.973 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:32.973 [2024-11-20 09:18:27.870835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:32.973 [2024-11-20 09:18:27.902642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:32.973 [2024-11-20 09:18:27.904203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:32.974 [2024-11-20 09:18:27.909775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:32.974 [2024-11-20 09:18:27.910152] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:32.974 [2024-11-20 09:18:27.910177] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:32.974 [2024-11-20 09:18:27.925833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:21:32.974 [2024-11-20 09:18:27.972800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:32.974 [2024-11-20 09:18:27.973932] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:21:32.974 [2024-11-20 09:18:27.980857] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:32.974 [2024-11-20 09:18:27.981293] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:21:32.974 [2024-11-20 09:18:27.981330] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.974 09:18:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:32.974 [2024-11-20 
09:18:27.995805] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:21:32.974 [2024-11-20 09:18:28.032790] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:21:32.974 [2024-11-20 09:18:28.033917] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:21:32.974 [2024-11-20 09:18:28.042830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:32.974 [2024-11-20 09:18:28.043234] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:21:32.974 [2024-11-20 09:18:28.043256] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:21:32.974 09:18:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.974 09:18:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:21:33.233 [2024-11-20 09:18:28.340743] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:33.233 [2024-11-20 09:18:28.347842] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:33.233 [2024-11-20 09:18:28.347910] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:33.492 09:18:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:21:33.492 09:18:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:33.492 09:18:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:33.492 09:18:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.492 09:18:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:34.058 09:18:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.058 09:18:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:34.058 09:18:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:34.058 09:18:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.058 09:18:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:34.317 09:18:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.317 09:18:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:34.317 09:18:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:21:34.317 09:18:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.317 09:18:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:34.884 09:18:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.884 09:18:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:21:34.884 09:18:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:21:34.884 09:18:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.884 09:18:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
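(The leftover-device check running here reduces to two RPC queries whose JSON arrays must be empty; a minimal equivalent, with the rpc.py path taken from earlier in this log:)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
[ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]          # no bdevs left behind
[ "$($rpc bdev_lvol_get_lvstores | jq length)" -eq 0 ]  # no lvstores left behind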
00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:21:35.143 ************************************ 00:21:35.143 END TEST test_create_multi_ublk 00:21:35.143 ************************************ 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:21:35.143 00:21:35.143 real 0m4.902s 00:21:35.143 user 0m1.353s 00:21:35.143 sys 0m0.191s 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.143 09:18:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:21:35.400 09:18:30 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:35.400 09:18:30 ublk -- ublk/ublk.sh@147 -- # cleanup 00:21:35.400 09:18:30 ublk -- ublk/ublk.sh@130 -- # killprocess 75425 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@954 -- # '[' -z 75425 ']' 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@958 -- # kill -0 75425 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@959 -- # uname 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75425 00:21:35.400 killing process with pid 75425 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75425' 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@973 -- # kill 75425 00:21:35.400 09:18:30 ublk -- common/autotest_common.sh@978 -- # wait 75425 00:21:36.362 [2024-11-20 09:18:31.407413] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:36.362 [2024-11-20 09:18:31.407545] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:37.739 00:21:37.739 real 0m30.235s 00:21:37.739 user 0m43.881s 00:21:37.739 sys 0m10.056s 00:21:37.739 09:18:32 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:37.739 09:18:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:21:37.739 ************************************ 00:21:37.739 END TEST ublk 00:21:37.739 ************************************ 00:21:37.739 09:18:32 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:37.739 09:18:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:37.739 
09:18:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.739 09:18:32 -- common/autotest_common.sh@10 -- # set +x 00:21:37.739 ************************************ 00:21:37.739 START TEST ublk_recovery 00:21:37.739 ************************************ 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:21:37.739 * Looking for test storage... 00:21:37.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.739 09:18:32 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:37.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.739 --rc genhtml_branch_coverage=1 00:21:37.739 --rc genhtml_function_coverage=1 00:21:37.739 --rc genhtml_legend=1 00:21:37.739 --rc geninfo_all_blocks=1 00:21:37.739 --rc geninfo_unexecuted_blocks=1 00:21:37.739 00:21:37.739 ' 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:37.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.739 --rc genhtml_branch_coverage=1 00:21:37.739 --rc genhtml_function_coverage=1 00:21:37.739 --rc genhtml_legend=1 00:21:37.739 --rc geninfo_all_blocks=1 00:21:37.739 --rc geninfo_unexecuted_blocks=1 00:21:37.739 00:21:37.739 ' 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:37.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.739 --rc genhtml_branch_coverage=1 00:21:37.739 --rc genhtml_function_coverage=1 00:21:37.739 --rc genhtml_legend=1 00:21:37.739 --rc geninfo_all_blocks=1 00:21:37.739 --rc geninfo_unexecuted_blocks=1 00:21:37.739 00:21:37.739 ' 00:21:37.739 09:18:32 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:37.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.739 --rc genhtml_branch_coverage=1 00:21:37.739 --rc genhtml_function_coverage=1 00:21:37.739 --rc genhtml_legend=1 00:21:37.739 --rc geninfo_all_blocks=1 00:21:37.739 --rc geninfo_unexecuted_blocks=1 00:21:37.739 00:21:37.739 ' 00:21:37.739 09:18:32 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:21:37.739 09:18:32 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:21:37.739 09:18:32 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:21:37.739 09:18:32 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:21:37.739 09:18:32 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:21:37.739 09:18:32 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:21:37.739 09:18:32 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:21:37.739 09:18:32 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:21:37.997 09:18:32 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:21:37.998 09:18:32 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:21:37.998 09:18:32 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75847 00:21:37.998 09:18:32 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:37.998 09:18:32 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75847 00:21:37.998 09:18:32 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:37.998 09:18:32 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75847 ']' 00:21:37.998 09:18:32 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.998 09:18:32 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.998 09:18:32 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.998 09:18:32 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.998 09:18:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:37.998 [2024-11-20 09:18:32.990836] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:21:37.998 [2024-11-20 09:18:32.991735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75847 ] 00:21:38.256 [2024-11-20 09:18:33.174712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:38.256 [2024-11-20 09:18:33.317284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.256 [2024-11-20 09:18:33.317291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.193 09:18:34 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.193 09:18:34 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:21:39.193 09:18:34 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:21:39.193 09:18:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.193 09:18:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.193 [2024-11-20 09:18:34.240789] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:39.193 [2024-11-20 09:18:34.243874] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:39.193 09:18:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.193 09:18:34 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:39.193 09:18:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.193 09:18:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.452 malloc0 00:21:39.452 09:18:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.452 09:18:34 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:21:39.452 09:18:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.452 09:18:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.452 [2024-11-20 09:18:34.404947] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:21:39.452 [2024-11-20 09:18:34.405099] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:21:39.452 [2024-11-20 09:18:34.405139] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:39.452 [2024-11-20 09:18:34.405153] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:21:39.452 [2024-11-20 09:18:34.415011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:21:39.452 [2024-11-20 09:18:34.415065] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:21:39.452 [2024-11-20 09:18:34.422772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:21:39.452 [2024-11-20 09:18:34.422977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:21:39.452 [2024-11-20 09:18:34.439775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:21:39.452 1 00:21:39.452 09:18:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.452 09:18:34 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:21:40.389 09:18:35 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75888 00:21:40.389 09:18:35 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:21:40.389 09:18:35 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:21:40.648 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:40.648 fio-3.35 00:21:40.648 Starting 1 process 00:21:45.923 09:18:40 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75847 00:21:45.923 09:18:40 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:21:51.194 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75847 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:21:51.194 09:18:45 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:21:51.194 09:18:45 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75999 00:21:51.194 09:18:45 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.194 09:18:45 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75999 00:21:51.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.194 09:18:45 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75999 ']' 00:21:51.194 09:18:45 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.194 09:18:45 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.194 09:18:45 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.194 09:18:45 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.194 09:18:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.194 [2024-11-20 09:18:45.601224] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
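(The crash-and-restart recorded above, together with the recovery RPCs issued just below, reduces to the following steps; pids, paths, and the device id are taken from this run — the replacement target comes up as pid 75999:)

kill -9 75847                                                      # SIGKILL the target while fio I/O is in flight
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &   # start a replacement target
# once the new target is listening: recreate the ublk target, then reattach
# the still-present kernel device /dev/ublkb1 to the rebuilt malloc0 bdev
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc ublk_create_target
$rpc ublk_recover_disk malloc0 1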
00:21:51.194 [2024-11-20 09:18:45.601440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75999 ] 00:21:51.194 [2024-11-20 09:18:45.799550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:51.194 [2024-11-20 09:18:45.953287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.194 [2024-11-20 09:18:45.953292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.763 09:18:46 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.763 09:18:46 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:21:51.763 09:18:46 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:21:51.763 09:18:46 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.763 09:18:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.763 [2024-11-20 09:18:46.874788] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:21:51.763 [2024-11-20 09:18:46.878004] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:21:51.763 09:18:46 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.763 09:18:46 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:21:51.763 09:18:46 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.763 09:18:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.022 malloc0 00:21:52.022 09:18:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.022 09:18:47 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:21:52.022 09:18:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.022 09:18:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.022 [2024-11-20 09:18:47.034010] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:21:52.022 [2024-11-20 09:18:47.034108] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:21:52.022 [2024-11-20 09:18:47.034127] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:21:52.022 [2024-11-20 09:18:47.040852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:21:52.022 [2024-11-20 09:18:47.040884] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:21:52.022 [2024-11-20 09:18:47.040897] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:21:52.022 [2024-11-20 09:18:47.041009] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:21:52.022 1 00:21:52.022 09:18:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.022 09:18:47 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75888 00:21:52.022 [2024-11-20 09:18:47.048796] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:21:52.022 [2024-11-20 09:18:47.056851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:21:52.022 [2024-11-20 09:18:47.063697] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:21:52.022 [2024-11-20 
09:18:47.063727] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:22:48.261 00:22:48.261 fio_test: (groupid=0, jobs=1): err= 0: pid=75895: Wed Nov 20 09:19:35 2024 00:22:48.261 read: IOPS=16.8k, BW=65.7MiB/s (68.9MB/s)(3945MiB/60003msec) 00:22:48.261 slat (nsec): min=1883, max=594264, avg=6892.03, stdev=3859.55 00:22:48.261 clat (usec): min=1635, max=6620.6k, avg=3727.33, stdev=51828.83 00:22:48.261 lat (usec): min=1642, max=6620.6k, avg=3734.22, stdev=51828.84 00:22:48.261 clat percentiles (usec): 00:22:48.261 | 1.00th=[ 2671], 5.00th=[ 2835], 10.00th=[ 2900], 20.00th=[ 2999], 00:22:48.261 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3228], 00:22:48.261 | 70.00th=[ 3294], 80.00th=[ 3392], 90.00th=[ 3589], 95.00th=[ 4490], 00:22:48.261 | 99.00th=[ 6980], 99.50th=[ 7373], 99.90th=[ 8848], 99.95th=[10945], 00:22:48.261 | 99.99th=[13566] 00:22:48.261 bw ( KiB/s): min= 8296, max=84096, per=100.00%, avg=74881.00, stdev=10958.72, samples=107 00:22:48.261 iops : min= 2074, max=21024, avg=18720.25, stdev=2739.68, samples=107 00:22:48.261 write: IOPS=16.8k, BW=65.7MiB/s (68.8MB/s)(3939MiB/60003msec); 0 zone resets 00:22:48.261 slat (usec): min=2, max=702, avg= 7.19, stdev= 4.02 00:22:48.261 clat (usec): min=1715, max=6620.6k, avg=3867.36, stdev=53510.43 00:22:48.261 lat (usec): min=1723, max=6620.6k, avg=3874.56, stdev=53510.43 00:22:48.261 clat percentiles (usec): 00:22:48.261 | 1.00th=[ 2737], 5.00th=[ 2966], 10.00th=[ 3032], 20.00th=[ 3130], 00:22:48.261 | 30.00th=[ 3195], 40.00th=[ 3261], 50.00th=[ 3294], 60.00th=[ 3359], 00:22:48.261 | 70.00th=[ 3425], 80.00th=[ 3523], 90.00th=[ 3687], 95.00th=[ 4293], 00:22:48.261 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8717], 99.95th=[10552], 00:22:48.261 | 99.99th=[13566] 00:22:48.261 bw ( KiB/s): min= 8232, max=82624, per=100.00%, avg=74790.13, stdev=10844.88, samples=107 00:22:48.261 iops : min= 2058, max=20656, avg=18697.50, stdev=2711.21, samples=107 00:22:48.261 lat (msec) : 2=0.03%, 4=93.73%, 10=6.18%, 20=0.05%, >=2000=0.01% 00:22:48.261 cpu : usr=9.39%, sys=21.82%, ctx=63981, majf=0, minf=13 00:22:48.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:22:48.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:48.261 issued rwts: total=1009923,1008504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:48.261 00:22:48.261 Run status group 0 (all jobs): 00:22:48.261 READ: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=3945MiB (4137MB), run=60003-60003msec 00:22:48.261 WRITE: bw=65.7MiB/s (68.8MB/s), 65.7MiB/s-65.7MiB/s (68.8MB/s-68.8MB/s), io=3939MiB (4131MB), run=60003-60003msec 00:22:48.261 00:22:48.261 Disk stats (read/write): 00:22:48.261 ublkb1: ios=1007715/1006379, merge=0/0, ticks=3662354/3679409, in_queue=7341763, util=99.95% 00:22:48.261 09:19:35 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.261 [2024-11-20 09:19:35.718888] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:22:48.261 [2024-11-20 09:19:35.761740] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:22:48.261 [2024-11-20 
09:19:35.762069] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:22:48.261 [2024-11-20 09:19:35.763157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:22:48.261 [2024-11-20 09:19:35.763502] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:22:48.261 [2024-11-20 09:19:35.763531] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.261 09:19:35 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.261 [2024-11-20 09:19:35.773955] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:48.261 [2024-11-20 09:19:35.780796] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:48.261 [2024-11-20 09:19:35.780841] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.261 09:19:35 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:22:48.261 09:19:35 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:22:48.261 09:19:35 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75999 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75999 ']' 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75999 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75999 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:48.261 killing process with pid 75999 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75999' 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75999 00:22:48.261 09:19:35 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75999 00:22:48.261 [2024-11-20 09:19:37.398811] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:22:48.261 [2024-11-20 09:19:37.398904] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:22:48.261 00:22:48.261 real 1m6.110s 00:22:48.261 user 1m46.461s 00:22:48.261 sys 0m33.555s 00:22:48.261 09:19:38 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.261 ************************************ 00:22:48.261 END TEST ublk_recovery 00:22:48.261 ************************************ 00:22:48.261 09:19:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.261 09:19:38 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:22:48.261 09:19:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:48.261 09:19:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.261 09:19:38 -- common/autotest_common.sh@10 -- # set +x 00:22:48.261 09:19:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:22:48.261 09:19:38 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:48.261 09:19:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:48.261 09:19:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.261 09:19:38 -- common/autotest_common.sh@10 -- # set +x 00:22:48.261 ************************************ 00:22:48.261 START TEST ftl 00:22:48.261 ************************************ 00:22:48.261 09:19:38 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:48.261 * Looking for test storage... 00:22:48.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:48.261 09:19:38 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:48.261 09:19:38 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:22:48.261 09:19:38 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:48.261 09:19:39 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:48.261 09:19:39 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:48.261 09:19:39 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:48.261 09:19:39 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:48.261 09:19:39 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:22:48.261 09:19:39 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:22:48.261 09:19:39 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:22:48.261 09:19:39 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:22:48.261 09:19:39 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:22:48.261 09:19:39 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:22:48.261 09:19:39 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:22:48.261 09:19:39 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:48.261 09:19:39 ftl -- scripts/common.sh@344 -- # case "$op" in 00:22:48.261 09:19:39 ftl -- scripts/common.sh@345 -- # : 1 00:22:48.261 09:19:39 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:48.261 09:19:39 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:48.261 09:19:39 ftl -- scripts/common.sh@365 -- # decimal 1 00:22:48.261 09:19:39 ftl -- scripts/common.sh@353 -- # local d=1 00:22:48.261 09:19:39 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:48.261 09:19:39 ftl -- scripts/common.sh@355 -- # echo 1 00:22:48.261 09:19:39 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:22:48.261 09:19:39 ftl -- scripts/common.sh@366 -- # decimal 2 00:22:48.261 09:19:39 ftl -- scripts/common.sh@353 -- # local d=2 00:22:48.261 09:19:39 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:48.261 09:19:39 ftl -- scripts/common.sh@355 -- # echo 2 00:22:48.261 09:19:39 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:22:48.261 09:19:39 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:48.261 09:19:39 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:48.262 09:19:39 ftl -- scripts/common.sh@368 -- # return 0 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.262 --rc genhtml_branch_coverage=1 00:22:48.262 --rc genhtml_function_coverage=1 00:22:48.262 --rc genhtml_legend=1 00:22:48.262 --rc geninfo_all_blocks=1 00:22:48.262 --rc geninfo_unexecuted_blocks=1 00:22:48.262 00:22:48.262 ' 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.262 --rc genhtml_branch_coverage=1 00:22:48.262 --rc genhtml_function_coverage=1 00:22:48.262 --rc genhtml_legend=1 00:22:48.262 --rc geninfo_all_blocks=1 00:22:48.262 --rc geninfo_unexecuted_blocks=1 00:22:48.262 00:22:48.262 ' 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.262 --rc genhtml_branch_coverage=1 00:22:48.262 --rc genhtml_function_coverage=1 00:22:48.262 --rc genhtml_legend=1 00:22:48.262 --rc geninfo_all_blocks=1 00:22:48.262 --rc geninfo_unexecuted_blocks=1 00:22:48.262 00:22:48.262 ' 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:48.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:48.262 --rc genhtml_branch_coverage=1 00:22:48.262 --rc genhtml_function_coverage=1 00:22:48.262 --rc genhtml_legend=1 00:22:48.262 --rc geninfo_all_blocks=1 00:22:48.262 --rc geninfo_unexecuted_blocks=1 00:22:48.262 00:22:48.262 ' 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:48.262 09:19:39 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:22:48.262 09:19:39 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:48.262 09:19:39 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:48.262 09:19:39 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
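The lt 1.15 2 probe traced above is scripts/common.sh comparing the installed lcov version field by field after splitting on '.', '-', and ':'. The same logic as a standalone helper (ver_lt is an illustrative name, not the script's; numeric fields assumed):

    ver_lt() {                            # returns 0 (true) iff $1 < $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                          # all fields equal: not less-than
    }
    ver_lt 1.15 2 && echo "1.15 < 2"      # mirrors the 'lt 1.15 2' call in the trace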
00:22:48.262 09:19:39 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:48.262 09:19:39 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:48.262 09:19:39 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:48.262 09:19:39 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:48.262 09:19:39 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:48.262 09:19:39 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:48.262 09:19:39 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:48.262 09:19:39 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:48.262 09:19:39 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:48.262 09:19:39 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:48.262 09:19:39 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:48.262 09:19:39 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:48.262 09:19:39 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:48.262 09:19:39 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:48.262 09:19:39 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:48.262 09:19:39 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:48.262 09:19:39 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:48.262 09:19:39 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:48.262 09:19:39 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:48.262 09:19:39 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:48.262 09:19:39 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:48.262 09:19:39 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:48.262 09:19:39 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:48.262 09:19:39 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:48.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:48.262 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:48.262 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:48.262 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:48.262 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76797 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:22:48.262 09:19:39 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76797 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@835 -- # '[' -z 76797 ']' 00:22:48.262 09:19:39 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.262 09:19:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:48.262 [2024-11-20 09:19:39.772155] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:22:48.262 [2024-11-20 09:19:39.772341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76797 ] 00:22:48.262 [2024-11-20 09:19:39.964340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.262 [2024-11-20 09:19:40.140040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.262 09:19:40 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.262 09:19:40 ftl -- common/autotest_common.sh@868 -- # return 0 00:22:48.262 09:19:40 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:22:48.262 09:19:41 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@50 -- # break 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:22:48.262 09:19:42 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:22:48.262 09:19:43 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:22:48.262 09:19:43 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:22:48.262 09:19:43 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:22:48.262 09:19:43 ftl -- ftl/ftl.sh@63 -- # break 00:22:48.262 09:19:43 ftl -- ftl/ftl.sh@66 -- # killprocess 76797 00:22:48.262 09:19:43 ftl -- common/autotest_common.sh@954 -- # '[' -z 76797 ']' 00:22:48.262 09:19:43 ftl -- common/autotest_common.sh@958 -- # kill -0 76797 00:22:48.262 09:19:43 ftl -- common/autotest_common.sh@959 -- # uname 00:22:48.262 09:19:43 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.262 09:19:43 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76797 00:22:48.262 09:19:43 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:48.262 09:19:43 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:48.262 killing process with pid 76797 00:22:48.262 09:19:43 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76797' 00:22:48.262 09:19:43 ftl -- common/autotest_common.sh@973 -- # kill 76797 00:22:48.262 09:19:43 ftl -- common/autotest_common.sh@978 -- # wait 76797 00:22:50.197 09:19:45 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:22:50.197 09:19:45 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:50.198 09:19:45 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:50.198 09:19:45 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.198 09:19:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:50.198 ************************************ 00:22:50.198 START TEST ftl_fio_basic 00:22:50.198 ************************************ 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:22:50.198 * Looking for test storage... 00:22:50.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:50.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.198 --rc genhtml_branch_coverage=1 00:22:50.198 --rc genhtml_function_coverage=1 00:22:50.198 --rc genhtml_legend=1 00:22:50.198 --rc geninfo_all_blocks=1 00:22:50.198 --rc geninfo_unexecuted_blocks=1 00:22:50.198 00:22:50.198 ' 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:50.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.198 --rc genhtml_branch_coverage=1 00:22:50.198 --rc genhtml_function_coverage=1 00:22:50.198 --rc genhtml_legend=1 00:22:50.198 --rc geninfo_all_blocks=1 00:22:50.198 --rc geninfo_unexecuted_blocks=1 00:22:50.198 00:22:50.198 ' 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:50.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.198 --rc genhtml_branch_coverage=1 00:22:50.198 --rc genhtml_function_coverage=1 00:22:50.198 --rc genhtml_legend=1 00:22:50.198 --rc geninfo_all_blocks=1 00:22:50.198 --rc geninfo_unexecuted_blocks=1 00:22:50.198 00:22:50.198 ' 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:50.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.198 --rc genhtml_branch_coverage=1 00:22:50.198 --rc genhtml_function_coverage=1 00:22:50.198 --rc genhtml_legend=1 00:22:50.198 --rc geninfo_all_blocks=1 00:22:50.198 --rc geninfo_unexecuted_blocks=1 00:22:50.198 00:22:50.198 ' 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:50.198 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
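Teardown in both tests goes through the killprocess helper traced earlier: probe the pid with kill -0, refuse to signal a sudo wrapper, then kill and wait to reap the child. A reconstruction of that guard logic from the trace, not the exact function body:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")      # reactor_0 in this run
            [ "$name" = sudo ] && return 1               # never signal sudo itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                          # reap and propagate status
    }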
00:22:50.457 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:50.457 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:50.457 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:50.457 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:50.457 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76940 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76940 00:22:50.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76940 ']' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:50.458 09:19:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:50.458 [2024-11-20 09:19:45.427563] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
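Both spdk_tgt instances in this section are launched the same way: background the binary with a core mask (-m 7 gives one reactor on each of cores 0-2, matching the three "Reactor started" notices that follow) and poll until the RPC socket answers, up to max_retries=100. A minimal sketch, assuming the default /var/tmp/spdk.sock socket and rpc_get_methods as the liveness probe (the helper's actual probe may differ):

    "$SPDK_BIN_DIR/spdk_tgt" -m 7 &       # 0x7 -> reactors on cores 0, 1, 2
    svcpid=$!
    for (( i = 0; i < 100; i++ )); do     # max_retries=100, as in the trace
        rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        kill -0 "$svcpid" || exit 1       # target died during startup
        sleep 0.5
    done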
00:22:50.458 [2024-11-20 09:19:45.428344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76940 ] 00:22:50.716 [2024-11-20 09:19:45.592487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:50.716 [2024-11-20 09:19:45.704867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.716 [2024-11-20 09:19:45.704934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.716 [2024-11-20 09:19:45.704953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.653 09:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.653 09:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:22:51.653 09:19:46 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:51.653 09:19:46 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:22:51.653 09:19:46 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:51.653 09:19:46 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:22:51.653 09:19:46 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:22:51.653 09:19:46 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:51.913 09:19:46 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:51.913 09:19:46 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:22:51.913 09:19:46 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:51.913 09:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:51.913 09:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:51.913 09:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:51.913 09:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:51.913 09:19:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:52.172 { 00:22:52.172 "name": "nvme0n1", 00:22:52.172 "aliases": [ 00:22:52.172 "c5a6e564-4468-41f2-8a5d-dbe693982003" 00:22:52.172 ], 00:22:52.172 "product_name": "NVMe disk", 00:22:52.172 "block_size": 4096, 00:22:52.172 "num_blocks": 1310720, 00:22:52.172 "uuid": "c5a6e564-4468-41f2-8a5d-dbe693982003", 00:22:52.172 "numa_id": -1, 00:22:52.172 "assigned_rate_limits": { 00:22:52.172 "rw_ios_per_sec": 0, 00:22:52.172 "rw_mbytes_per_sec": 0, 00:22:52.172 "r_mbytes_per_sec": 0, 00:22:52.172 "w_mbytes_per_sec": 0 00:22:52.172 }, 00:22:52.172 "claimed": false, 00:22:52.172 "zoned": false, 00:22:52.172 "supported_io_types": { 00:22:52.172 "read": true, 00:22:52.172 "write": true, 00:22:52.172 "unmap": true, 00:22:52.172 "flush": true, 00:22:52.172 "reset": true, 00:22:52.172 "nvme_admin": true, 00:22:52.172 "nvme_io": true, 00:22:52.172 "nvme_io_md": false, 00:22:52.172 "write_zeroes": true, 00:22:52.172 "zcopy": false, 00:22:52.172 "get_zone_info": false, 00:22:52.172 "zone_management": false, 00:22:52.172 "zone_append": false, 00:22:52.172 "compare": true, 00:22:52.172 "compare_and_write": false, 00:22:52.172 "abort": true, 00:22:52.172 
"seek_hole": false, 00:22:52.172 "seek_data": false, 00:22:52.172 "copy": true, 00:22:52.172 "nvme_iov_md": false 00:22:52.172 }, 00:22:52.172 "driver_specific": { 00:22:52.172 "nvme": [ 00:22:52.172 { 00:22:52.172 "pci_address": "0000:00:11.0", 00:22:52.172 "trid": { 00:22:52.172 "trtype": "PCIe", 00:22:52.172 "traddr": "0000:00:11.0" 00:22:52.172 }, 00:22:52.172 "ctrlr_data": { 00:22:52.172 "cntlid": 0, 00:22:52.172 "vendor_id": "0x1b36", 00:22:52.172 "model_number": "QEMU NVMe Ctrl", 00:22:52.172 "serial_number": "12341", 00:22:52.172 "firmware_revision": "8.0.0", 00:22:52.172 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:52.172 "oacs": { 00:22:52.172 "security": 0, 00:22:52.172 "format": 1, 00:22:52.172 "firmware": 0, 00:22:52.172 "ns_manage": 1 00:22:52.172 }, 00:22:52.172 "multi_ctrlr": false, 00:22:52.172 "ana_reporting": false 00:22:52.172 }, 00:22:52.172 "vs": { 00:22:52.172 "nvme_version": "1.4" 00:22:52.172 }, 00:22:52.172 "ns_data": { 00:22:52.172 "id": 1, 00:22:52.172 "can_share": false 00:22:52.172 } 00:22:52.172 } 00:22:52.172 ], 00:22:52.172 "mp_policy": "active_passive" 00:22:52.172 } 00:22:52.172 } 00:22:52.172 ]' 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:52.172 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:52.431 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:22:52.431 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:52.690 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=08bbcb87-6c46-4ed0-83ee-7b8f5d9be8b2 00:22:52.690 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 08bbcb87-6c46-4ed0-83ee-7b8f5d9be8b2 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4ec2aae7-19bd-4439-b15a-f61ab6a13e06 
00:22:52.948 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:52.948 09:19:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:53.207 { 00:22:53.207 "name": "4ec2aae7-19bd-4439-b15a-f61ab6a13e06", 00:22:53.207 "aliases": [ 00:22:53.207 "lvs/nvme0n1p0" 00:22:53.207 ], 00:22:53.207 "product_name": "Logical Volume", 00:22:53.207 "block_size": 4096, 00:22:53.207 "num_blocks": 26476544, 00:22:53.207 "uuid": "4ec2aae7-19bd-4439-b15a-f61ab6a13e06", 00:22:53.207 "assigned_rate_limits": { 00:22:53.207 "rw_ios_per_sec": 0, 00:22:53.207 "rw_mbytes_per_sec": 0, 00:22:53.207 "r_mbytes_per_sec": 0, 00:22:53.207 "w_mbytes_per_sec": 0 00:22:53.207 }, 00:22:53.207 "claimed": false, 00:22:53.207 "zoned": false, 00:22:53.207 "supported_io_types": { 00:22:53.207 "read": true, 00:22:53.207 "write": true, 00:22:53.207 "unmap": true, 00:22:53.207 "flush": false, 00:22:53.207 "reset": true, 00:22:53.207 "nvme_admin": false, 00:22:53.207 "nvme_io": false, 00:22:53.207 "nvme_io_md": false, 00:22:53.207 "write_zeroes": true, 00:22:53.207 "zcopy": false, 00:22:53.207 "get_zone_info": false, 00:22:53.207 "zone_management": false, 00:22:53.207 "zone_append": false, 00:22:53.207 "compare": false, 00:22:53.207 "compare_and_write": false, 00:22:53.207 "abort": false, 00:22:53.207 "seek_hole": true, 00:22:53.207 "seek_data": true, 00:22:53.207 "copy": false, 00:22:53.207 "nvme_iov_md": false 00:22:53.207 }, 00:22:53.207 "driver_specific": { 00:22:53.207 "lvol": { 00:22:53.207 "lvol_store_uuid": "08bbcb87-6c46-4ed0-83ee-7b8f5d9be8b2", 00:22:53.207 "base_bdev": "nvme0n1", 00:22:53.207 "thin_provision": true, 00:22:53.207 "num_allocated_clusters": 0, 00:22:53.207 "snapshot": false, 00:22:53.207 "clone": false, 00:22:53.207 "esnap_clone": false 00:22:53.207 } 00:22:53.207 } 00:22:53.207 } 00:22:53.207 ]' 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:22:53.207 09:19:48 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:53.775 09:19:48 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:53.775 09:19:48 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:53.775 09:19:48 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:53.775 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:53.775 09:19:48 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:53.775 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:53.775 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:53.775 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:54.034 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:54.034 { 00:22:54.034 "name": "4ec2aae7-19bd-4439-b15a-f61ab6a13e06", 00:22:54.034 "aliases": [ 00:22:54.034 "lvs/nvme0n1p0" 00:22:54.034 ], 00:22:54.034 "product_name": "Logical Volume", 00:22:54.034 "block_size": 4096, 00:22:54.034 "num_blocks": 26476544, 00:22:54.034 "uuid": "4ec2aae7-19bd-4439-b15a-f61ab6a13e06", 00:22:54.034 "assigned_rate_limits": { 00:22:54.034 "rw_ios_per_sec": 0, 00:22:54.034 "rw_mbytes_per_sec": 0, 00:22:54.034 "r_mbytes_per_sec": 0, 00:22:54.034 "w_mbytes_per_sec": 0 00:22:54.034 }, 00:22:54.034 "claimed": false, 00:22:54.034 "zoned": false, 00:22:54.034 "supported_io_types": { 00:22:54.034 "read": true, 00:22:54.034 "write": true, 00:22:54.034 "unmap": true, 00:22:54.034 "flush": false, 00:22:54.034 "reset": true, 00:22:54.034 "nvme_admin": false, 00:22:54.034 "nvme_io": false, 00:22:54.034 "nvme_io_md": false, 00:22:54.034 "write_zeroes": true, 00:22:54.034 "zcopy": false, 00:22:54.034 "get_zone_info": false, 00:22:54.034 "zone_management": false, 00:22:54.034 "zone_append": false, 00:22:54.034 "compare": false, 00:22:54.034 "compare_and_write": false, 00:22:54.034 "abort": false, 00:22:54.034 "seek_hole": true, 00:22:54.034 "seek_data": true, 00:22:54.034 "copy": false, 00:22:54.034 "nvme_iov_md": false 00:22:54.034 }, 00:22:54.034 "driver_specific": { 00:22:54.034 "lvol": { 00:22:54.034 "lvol_store_uuid": "08bbcb87-6c46-4ed0-83ee-7b8f5d9be8b2", 00:22:54.034 "base_bdev": "nvme0n1", 00:22:54.034 "thin_provision": true, 00:22:54.034 "num_allocated_clusters": 0, 00:22:54.034 "snapshot": false, 00:22:54.034 "clone": false, 00:22:54.034 "esnap_clone": false 00:22:54.034 } 00:22:54.034 } 00:22:54.034 } 00:22:54.034 ]' 00:22:54.034 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:54.034 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:54.034 09:19:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:54.034 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:54.034 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:54.034 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:54.034 09:19:49 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:22:54.034 09:19:49 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:54.292 09:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:22:54.292 09:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:22:54.292 09:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:22:54.292 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:22:54.292 09:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:54.292 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:54.292 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:54.292 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:22:54.292 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:22:54.292 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ec2aae7-19bd-4439-b15a-f61ab6a13e06 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:54.550 { 00:22:54.550 "name": "4ec2aae7-19bd-4439-b15a-f61ab6a13e06", 00:22:54.550 "aliases": [ 00:22:54.550 "lvs/nvme0n1p0" 00:22:54.550 ], 00:22:54.550 "product_name": "Logical Volume", 00:22:54.550 "block_size": 4096, 00:22:54.550 "num_blocks": 26476544, 00:22:54.550 "uuid": "4ec2aae7-19bd-4439-b15a-f61ab6a13e06", 00:22:54.550 "assigned_rate_limits": { 00:22:54.550 "rw_ios_per_sec": 0, 00:22:54.550 "rw_mbytes_per_sec": 0, 00:22:54.550 "r_mbytes_per_sec": 0, 00:22:54.550 "w_mbytes_per_sec": 0 00:22:54.550 }, 00:22:54.550 "claimed": false, 00:22:54.550 "zoned": false, 00:22:54.550 "supported_io_types": { 00:22:54.550 "read": true, 00:22:54.550 "write": true, 00:22:54.550 "unmap": true, 00:22:54.550 "flush": false, 00:22:54.550 "reset": true, 00:22:54.550 "nvme_admin": false, 00:22:54.550 "nvme_io": false, 00:22:54.550 "nvme_io_md": false, 00:22:54.550 "write_zeroes": true, 00:22:54.550 "zcopy": false, 00:22:54.550 "get_zone_info": false, 00:22:54.550 "zone_management": false, 00:22:54.550 "zone_append": false, 00:22:54.550 "compare": false, 00:22:54.550 "compare_and_write": false, 00:22:54.550 "abort": false, 00:22:54.550 "seek_hole": true, 00:22:54.550 "seek_data": true, 00:22:54.550 "copy": false, 00:22:54.550 "nvme_iov_md": false 00:22:54.550 }, 00:22:54.550 "driver_specific": { 00:22:54.550 "lvol": { 00:22:54.550 "lvol_store_uuid": "08bbcb87-6c46-4ed0-83ee-7b8f5d9be8b2", 00:22:54.550 "base_bdev": "nvme0n1", 00:22:54.550 "thin_provision": true, 00:22:54.550 "num_allocated_clusters": 0, 00:22:54.550 "snapshot": false, 00:22:54.550 "clone": false, 00:22:54.550 "esnap_clone": false 00:22:54.550 } 00:22:54.550 } 00:22:54.550 } 00:22:54.550 ]' 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:22:54.550 09:19:49 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4ec2aae7-19bd-4439-b15a-f61ab6a13e06 -c nvc0n1p0 --l2p_dram_limit 60 00:22:54.810 [2024-11-20 09:19:49.768076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.810 [2024-11-20 09:19:49.768131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:54.810 [2024-11-20 09:19:49.768171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:54.810 
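Stripped of the xtrace noise, the device assembled across these steps is: base namespace nvme0n1 (0000:00:11.0) carrying a thin-provisioned 103424 MiB lvol, and cache namespace nvc0n1 (0000:00:10.0) carrying a 5171 MiB split used as the NV cache, both handed to bdev_ftl_create. (The "[: -eq: unary operator expected" from fio.sh line 52 above is the usual empty-variable test, '[' -eq 1 ']'; guarding with a default, e.g. [[ "${flag:-0}" -eq 1 ]] with an illustrative variable name, would avoid it.) The RPC sequence, condensed from the trace:

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base dev
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache dev
    lvs=$(rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)                    # 08bbcb87-...
    lvol=$(rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # thin lvol
    rpc.py bdev_split_create nvc0n1 -s 5171 1                             # -> nvc0n1p0
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60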
[2024-11-20 09:19:49.768182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.810 [2024-11-20 09:19:49.768269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.810 [2024-11-20 09:19:49.768287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.810 [2024-11-20 09:19:49.768302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:54.810 [2024-11-20 09:19:49.768312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.810 [2024-11-20 09:19:49.768361] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:54.810 [2024-11-20 09:19:49.769397] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:54.810 [2024-11-20 09:19:49.769434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.810 [2024-11-20 09:19:49.769446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.810 [2024-11-20 09:19:49.769460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:22:54.810 [2024-11-20 09:19:49.769470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.810 [2024-11-20 09:19:49.769609] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dd76072d-7272-4c91-8246-a71687d8d77e 00:22:54.810 [2024-11-20 09:19:49.771562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.810 [2024-11-20 09:19:49.771625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:54.810 [2024-11-20 09:19:49.771641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:54.810 [2024-11-20 09:19:49.771653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.810 [2024-11-20 09:19:49.781270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.810 [2024-11-20 09:19:49.781334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.810 [2024-11-20 09:19:49.781350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.475 ms 00:22:54.810 [2024-11-20 09:19:49.781363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.810 [2024-11-20 09:19:49.781523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.810 [2024-11-20 09:19:49.781564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.810 [2024-11-20 09:19:49.781576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:54.810 [2024-11-20 09:19:49.781593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.810 [2024-11-20 09:19:49.781652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.810 [2024-11-20 09:19:49.781702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:54.810 [2024-11-20 09:19:49.781716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:54.810 [2024-11-20 09:19:49.781729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.810 [2024-11-20 09:19:49.781790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:54.810 [2024-11-20 09:19:49.786882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.811 [2024-11-20 
09:19:49.786934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.811 [2024-11-20 09:19:49.786968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.096 ms 00:22:54.811 [2024-11-20 09:19:49.786982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.811 [2024-11-20 09:19:49.787071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.811 [2024-11-20 09:19:49.787092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:54.811 [2024-11-20 09:19:49.787106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:54.811 [2024-11-20 09:19:49.787117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.811 [2024-11-20 09:19:49.787234] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:54.811 [2024-11-20 09:19:49.787400] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:54.811 [2024-11-20 09:19:49.787426] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:54.811 [2024-11-20 09:19:49.787459] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:54.811 [2024-11-20 09:19:49.787477] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:54.811 [2024-11-20 09:19:49.787490] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:54.811 [2024-11-20 09:19:49.787505] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:54.811 [2024-11-20 09:19:49.787516] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:54.811 [2024-11-20 09:19:49.787529] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:54.811 [2024-11-20 09:19:49.787540] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:54.811 [2024-11-20 09:19:49.787556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.811 [2024-11-20 09:19:49.787570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:54.811 [2024-11-20 09:19:49.787591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:22:54.811 [2024-11-20 09:19:49.787602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.811 [2024-11-20 09:19:49.787757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.811 [2024-11-20 09:19:49.787780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:54.811 [2024-11-20 09:19:49.787795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:54.811 [2024-11-20 09:19:49.787806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.811 [2024-11-20 09:19:49.787944] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:54.811 [2024-11-20 09:19:49.787959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:54.811 [2024-11-20 09:19:49.787977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:54.811 [2024-11-20 09:19:49.788003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788033] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:22:54.811 [2024-11-20 09:19:49.788042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:54.811 [2024-11-20 09:19:49.788065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:54.811 [2024-11-20 09:19:49.788077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.811 [2024-11-20 09:19:49.788098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:54.811 [2024-11-20 09:19:49.788108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:54.811 [2024-11-20 09:19:49.788121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.811 [2024-11-20 09:19:49.788131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:54.811 [2024-11-20 09:19:49.788145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:54.811 [2024-11-20 09:19:49.788155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:54.811 [2024-11-20 09:19:49.788180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:54.811 [2024-11-20 09:19:49.788192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:54.811 [2024-11-20 09:19:49.788221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.811 [2024-11-20 09:19:49.788243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:54.811 [2024-11-20 09:19:49.788252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.811 [2024-11-20 09:19:49.788274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:54.811 [2024-11-20 09:19:49.788291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.811 [2024-11-20 09:19:49.788314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:54.811 [2024-11-20 09:19:49.788324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.811 [2024-11-20 09:19:49.788346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:54.811 [2024-11-20 09:19:49.788361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.811 [2024-11-20 09:19:49.788383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:54.811 [2024-11-20 09:19:49.788412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:54.811 [2024-11-20 09:19:49.788426] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.811 [2024-11-20 09:19:49.788436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:54.811 [2024-11-20 09:19:49.788450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:54.811 [2024-11-20 09:19:49.788460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:54.811 [2024-11-20 09:19:49.788481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:54.811 [2024-11-20 09:19:49.788492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788503] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:54.811 [2024-11-20 09:19:49.788516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:54.811 [2024-11-20 09:19:49.788526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:54.811 [2024-11-20 09:19:49.788539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.811 [2024-11-20 09:19:49.788549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:54.811 [2024-11-20 09:19:49.788564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:54.811 [2024-11-20 09:19:49.788574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:54.811 [2024-11-20 09:19:49.788586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:54.811 [2024-11-20 09:19:49.788595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:54.811 [2024-11-20 09:19:49.788607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:54.811 [2024-11-20 09:19:49.788622] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:54.811 [2024-11-20 09:19:49.788638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.811 [2024-11-20 09:19:49.788677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:54.811 [2024-11-20 09:19:49.788692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:54.812 [2024-11-20 09:19:49.788703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:54.812 [2024-11-20 09:19:49.788721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:54.812 [2024-11-20 09:19:49.788732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:54.812 [2024-11-20 09:19:49.788745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:54.812 [2024-11-20 09:19:49.788756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:54.812 [2024-11-20 09:19:49.788769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:22:54.812 [2024-11-20 09:19:49.788779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:54.812 [2024-11-20 09:19:49.788797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:54.812 [2024-11-20 09:19:49.788808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:54.812 [2024-11-20 09:19:49.788821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:54.812 [2024-11-20 09:19:49.788832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:54.812 [2024-11-20 09:19:49.788845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:54.812 [2024-11-20 09:19:49.788856] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:54.812 [2024-11-20 09:19:49.788870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.812 [2024-11-20 09:19:49.788885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:54.812 [2024-11-20 09:19:49.788898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:54.812 [2024-11-20 09:19:49.788910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:54.812 [2024-11-20 09:19:49.788923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:54.812 [2024-11-20 09:19:49.788936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.812 [2024-11-20 09:19:49.788949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:54.812 [2024-11-20 09:19:49.788961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:22:54.812 [2024-11-20 09:19:49.788973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.812 [2024-11-20 09:19:49.789067] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
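The layout dump and the superblock dump above describe the same regions in two units: the dump_region lines report MiB, while the ftl_sb_v5 records give raw block offsets and sizes in hex. Assuming the 4096-byte FTL block size reported for ftl0 further down in this log, the hex fields convert straight into the MiB figures; the type:0x2 record (blk_offs:0x20 blk_sz:0x5000), for instance, lines up with the "Region l2p" entry (offset 0.12 MiB, 80.00 MiB of blocks). A quick shell cross-check, illustrative only and not part of the test scripts:

  $ echo $(( 0x5000 * 4096 / 1048576 ))                 # 80   -> "blocks: 80.00 MiB"
  $ echo "scale=2; $(( 0x20 * 4096 )) / 1048576" | bc   # .12  -> "offset: 0.12 MiB"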
00:22:54.812 [2024-11-20 09:19:49.789088] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:58.100 [2024-11-20 09:19:52.881911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.100 [2024-11-20 09:19:52.881975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:58.100 [2024-11-20 09:19:52.882030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3092.859 ms 00:22:58.100 [2024-11-20 09:19:52.882060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.100 [2024-11-20 09:19:52.919037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.100 [2024-11-20 09:19:52.919112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.100 [2024-11-20 09:19:52.919131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.658 ms 00:22:58.101 [2024-11-20 09:19:52.919145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:52.919324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:52.919353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:58.101 [2024-11-20 09:19:52.919367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:58.101 [2024-11-20 09:19:52.919382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:52.970315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:52.970604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.101 [2024-11-20 09:19:52.970680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.843 ms 00:22:58.101 [2024-11-20 09:19:52.970704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:52.970773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:52.970798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.101 [2024-11-20 09:19:52.970815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:58.101 [2024-11-20 09:19:52.970832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:52.971529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:52.971557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.101 [2024-11-20 09:19:52.971574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:22:58.101 [2024-11-20 09:19:52.971595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:52.971822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:52.971850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.101 [2024-11-20 09:19:52.971868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:22:58.101 [2024-11-20 09:19:52.971887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:52.994041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:52.994083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.101 [2024-11-20 
09:19:52.994133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.113 ms 00:22:58.101 [2024-11-20 09:19:52.994145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:53.007049] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:58.101 [2024-11-20 09:19:53.027199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:53.027541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:58.101 [2024-11-20 09:19:53.027587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.938 ms 00:22:58.101 [2024-11-20 09:19:53.027604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:53.092672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:53.093006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:58.101 [2024-11-20 09:19:53.093047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.978 ms 00:22:58.101 [2024-11-20 09:19:53.093059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:53.093336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:53.093357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:58.101 [2024-11-20 09:19:53.093375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:22:58.101 [2024-11-20 09:19:53.093386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:53.121626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:53.121712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:58.101 [2024-11-20 09:19:53.121733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.154 ms 00:22:58.101 [2024-11-20 09:19:53.121744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:53.150613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:53.150695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:58.101 [2024-11-20 09:19:53.150716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.807 ms 00:22:58.101 [2024-11-20 09:19:53.150727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.101 [2024-11-20 09:19:53.151629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.101 [2024-11-20 09:19:53.151731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:58.101 [2024-11-20 09:19:53.151752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:22:58.101 [2024-11-20 09:19:53.151763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.359 [2024-11-20 09:19:53.239641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.359 [2024-11-20 09:19:53.239890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:58.359 [2024-11-20 09:19:53.239930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.800 ms 00:22:58.359 [2024-11-20 09:19:53.239947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.359 [2024-11-20 
09:19:53.269955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.359 [2024-11-20 09:19:53.270171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:58.359 [2024-11-20 09:19:53.270205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.871 ms 00:22:58.359 [2024-11-20 09:19:53.270218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.359 [2024-11-20 09:19:53.297520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.359 [2024-11-20 09:19:53.297572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:58.359 [2024-11-20 09:19:53.297591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.243 ms 00:22:58.359 [2024-11-20 09:19:53.297602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.360 [2024-11-20 09:19:53.324369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.360 [2024-11-20 09:19:53.324407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:58.360 [2024-11-20 09:19:53.324442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.691 ms 00:22:58.360 [2024-11-20 09:19:53.324453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.360 [2024-11-20 09:19:53.324511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.360 [2024-11-20 09:19:53.324527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:58.360 [2024-11-20 09:19:53.324544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:58.360 [2024-11-20 09:19:53.324558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.360 [2024-11-20 09:19:53.324819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.360 [2024-11-20 09:19:53.324844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:58.360 [2024-11-20 09:19:53.324875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:58.360 [2024-11-20 09:19:53.324901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.360 [2024-11-20 09:19:53.326596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3557.795 ms, result 0 00:22:58.360 { 00:22:58.360 "name": "ftl0", 00:22:58.360 "uuid": "dd76072d-7272-4c91-8246-a71687d8d77e" 00:22:58.360 } 00:22:58.360 09:19:53 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:22:58.360 09:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:22:58.360 09:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:58.360 09:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:22:58.360 09:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:58.360 09:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:58.360 09:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:58.618 09:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:22:58.877 [ 00:22:58.877 { 00:22:58.877 "name": "ftl0", 00:22:58.877 "aliases": [ 00:22:58.877 "dd76072d-7272-4c91-8246-a71687d8d77e" 00:22:58.877 ], 00:22:58.877 "product_name": "FTL 
disk", 00:22:58.877 "block_size": 4096, 00:22:58.877 "num_blocks": 20971520, 00:22:58.877 "uuid": "dd76072d-7272-4c91-8246-a71687d8d77e", 00:22:58.877 "assigned_rate_limits": { 00:22:58.877 "rw_ios_per_sec": 0, 00:22:58.877 "rw_mbytes_per_sec": 0, 00:22:58.877 "r_mbytes_per_sec": 0, 00:22:58.877 "w_mbytes_per_sec": 0 00:22:58.877 }, 00:22:58.877 "claimed": false, 00:22:58.877 "zoned": false, 00:22:58.877 "supported_io_types": { 00:22:58.877 "read": true, 00:22:58.878 "write": true, 00:22:58.878 "unmap": true, 00:22:58.878 "flush": true, 00:22:58.878 "reset": false, 00:22:58.878 "nvme_admin": false, 00:22:58.878 "nvme_io": false, 00:22:58.878 "nvme_io_md": false, 00:22:58.878 "write_zeroes": true, 00:22:58.878 "zcopy": false, 00:22:58.878 "get_zone_info": false, 00:22:58.878 "zone_management": false, 00:22:58.878 "zone_append": false, 00:22:58.878 "compare": false, 00:22:58.878 "compare_and_write": false, 00:22:58.878 "abort": false, 00:22:58.878 "seek_hole": false, 00:22:58.878 "seek_data": false, 00:22:58.878 "copy": false, 00:22:58.878 "nvme_iov_md": false 00:22:58.878 }, 00:22:58.878 "driver_specific": { 00:22:58.878 "ftl": { 00:22:58.878 "base_bdev": "4ec2aae7-19bd-4439-b15a-f61ab6a13e06", 00:22:58.878 "cache": "nvc0n1p0" 00:22:58.878 } 00:22:58.878 } 00:22:58.878 } 00:22:58.878 ] 00:22:58.878 09:19:53 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:22:58.878 09:19:53 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:22:58.878 09:19:53 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:59.136 09:19:54 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:22:59.136 09:19:54 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:59.395 [2024-11-20 09:19:54.411107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.395 [2024-11-20 09:19:54.411182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:59.395 [2024-11-20 09:19:54.411203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:59.395 [2024-11-20 09:19:54.411218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.395 [2024-11-20 09:19:54.411263] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:59.395 [2024-11-20 09:19:54.414949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.395 [2024-11-20 09:19:54.414983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:59.395 [2024-11-20 09:19:54.415000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:22:59.395 [2024-11-20 09:19:54.415011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.395 [2024-11-20 09:19:54.415517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.395 [2024-11-20 09:19:54.415543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:59.395 [2024-11-20 09:19:54.415558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:22:59.395 [2024-11-20 09:19:54.415569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.395 [2024-11-20 09:19:54.418750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.395 [2024-11-20 09:19:54.418783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:59.396 
[2024-11-20 09:19:54.418799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.150 ms 00:22:59.396 [2024-11-20 09:19:54.418809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.396 [2024-11-20 09:19:54.425075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.396 [2024-11-20 09:19:54.425103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:59.396 [2024-11-20 09:19:54.425134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.228 ms 00:22:59.396 [2024-11-20 09:19:54.425145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.396 [2024-11-20 09:19:54.455507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.396 [2024-11-20 09:19:54.455546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:59.396 [2024-11-20 09:19:54.455581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.291 ms 00:22:59.396 [2024-11-20 09:19:54.455592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.396 [2024-11-20 09:19:54.472821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.396 [2024-11-20 09:19:54.472874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:59.396 [2024-11-20 09:19:54.472912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.125 ms 00:22:59.396 [2024-11-20 09:19:54.472934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.396 [2024-11-20 09:19:54.473138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.396 [2024-11-20 09:19:54.473158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:59.396 [2024-11-20 09:19:54.473172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:22:59.396 [2024-11-20 09:19:54.473183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.396 [2024-11-20 09:19:54.500769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.396 [2024-11-20 09:19:54.500808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:59.396 [2024-11-20 09:19:54.500842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.550 ms 00:22:59.396 [2024-11-20 09:19:54.500852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.656 [2024-11-20 09:19:54.527007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.656 [2024-11-20 09:19:54.527044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:59.656 [2024-11-20 09:19:54.527077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.103 ms 00:22:59.656 [2024-11-20 09:19:54.527087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.656 [2024-11-20 09:19:54.553056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.656 [2024-11-20 09:19:54.553094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:59.656 [2024-11-20 09:19:54.553134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.913 ms 00:22:59.656 [2024-11-20 09:19:54.553146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.656 [2024-11-20 09:19:54.579883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.656 [2024-11-20 09:19:54.579930] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:59.656 [2024-11-20 09:19:54.579965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.591 ms 00:22:59.656 [2024-11-20 09:19:54.579975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.656 [2024-11-20 09:19:54.580026] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:59.656 [2024-11-20 09:19:54.580046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 
[2024-11-20 09:19:54.580325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:22:59.656 [2024-11-20 09:19:54.580659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:59.656 [2024-11-20 09:19:54.580669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.580998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:59.657 [2024-11-20 09:19:54.581482] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:59.657 [2024-11-20 09:19:54.581495] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dd76072d-7272-4c91-8246-a71687d8d77e 00:22:59.657 [2024-11-20 09:19:54.581507] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:59.657 [2024-11-20 09:19:54.581522] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:59.657 [2024-11-20 09:19:54.581532] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:59.657 [2024-11-20 09:19:54.581551] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:59.657 [2024-11-20 09:19:54.581562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:59.657 [2024-11-20 09:19:54.581575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:59.657 [2024-11-20 09:19:54.581586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:59.657 [2024-11-20 09:19:54.581597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:59.657 [2024-11-20 09:19:54.581606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:59.657 [2024-11-20 09:19:54.581619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.657 [2024-11-20 09:19:54.581630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:59.657 [2024-11-20 09:19:54.581659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:22:59.657 [2024-11-20 09:19:54.581670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.657 [2024-11-20 09:19:54.597325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.657 [2024-11-20 09:19:54.597513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:59.657 [2024-11-20 09:19:54.597636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.559 ms 00:22:59.657 [2024-11-20 09:19:54.597782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.657 [2024-11-20 09:19:54.598309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:59.657 [2024-11-20 09:19:54.598466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:59.657 [2024-11-20 09:19:54.598569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:22:59.657 [2024-11-20 09:19:54.598696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.657 [2024-11-20 09:19:54.652164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.657 [2024-11-20 09:19:54.652397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:59.657 [2024-11-20 09:19:54.652510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.657 [2024-11-20 09:19:54.652622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
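At this point the device is formatted but unwritten: all 100 bands (261120 blocks each) are free with wr_cnt 0, and "WAF: inf" is just the write-amplification ratio of 960 total (metadata) writes to 0 user writes. The aggregate band capacity also lines up with the base device's 102400.00 MiB data_btm region, assuming the 4 KiB block size from the bdev JSON; the ~400 MiB remainder presumably holds per-band metadata:

  $ echo $(( 100 * 261120 * 4096 / 1048576 ))   # 102000 MiB of band data vs 102400.00 MiB data_btm

The Rollback entries that follow are the shutdown path releasing the startup steps in reverse order (reloc, bands metadata, trim map, valid map, NV cache, and so on down to the base bdev).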
00:22:59.657 [2024-11-20 09:19:54.652766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.657 [2024-11-20 09:19:54.652814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:59.657 [2024-11-20 09:19:54.652916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.657 [2024-11-20 09:19:54.652962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.657 [2024-11-20 09:19:54.653121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.657 [2024-11-20 09:19:54.653227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:59.657 [2024-11-20 09:19:54.653341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.657 [2024-11-20 09:19:54.653388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.657 [2024-11-20 09:19:54.653535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.657 [2024-11-20 09:19:54.653559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:59.657 [2024-11-20 09:19:54.653574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.657 [2024-11-20 09:19:54.653584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.657 [2024-11-20 09:19:54.752592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.657 [2024-11-20 09:19:54.752698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:59.657 [2024-11-20 09:19:54.752729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.657 [2024-11-20 09:19:54.752740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.916 [2024-11-20 09:19:54.827726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.916 [2024-11-20 09:19:54.827969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:59.916 [2024-11-20 09:19:54.828002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.916 [2024-11-20 09:19:54.828015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.916 [2024-11-20 09:19:54.828129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.916 [2024-11-20 09:19:54.828147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:59.916 [2024-11-20 09:19:54.828161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.916 [2024-11-20 09:19:54.828176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.916 [2024-11-20 09:19:54.828284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.916 [2024-11-20 09:19:54.828301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:59.916 [2024-11-20 09:19:54.828315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.916 [2024-11-20 09:19:54.828326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.916 [2024-11-20 09:19:54.828487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.916 [2024-11-20 09:19:54.828507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:59.916 [2024-11-20 09:19:54.828521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.916 [2024-11-20 
09:19:54.828531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.916 [2024-11-20 09:19:54.828605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.916 [2024-11-20 09:19:54.828621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:59.916 [2024-11-20 09:19:54.828650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.916 [2024-11-20 09:19:54.828679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.916 [2024-11-20 09:19:54.828739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.916 [2024-11-20 09:19:54.828753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:59.916 [2024-11-20 09:19:54.828782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.916 [2024-11-20 09:19:54.828792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.916 [2024-11-20 09:19:54.828862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:59.916 [2024-11-20 09:19:54.828877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:59.916 [2024-11-20 09:19:54.828890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:59.916 [2024-11-20 09:19:54.828900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:59.916 [2024-11-20 09:19:54.829132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 417.996 ms, result 0 00:22:59.916 true 00:22:59.916 09:19:54 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76940 00:22:59.916 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76940 ']' 00:22:59.916 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76940 00:22:59.916 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:22:59.916 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.916 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76940 00:22:59.916 killing process with pid 76940 00:22:59.916 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:59.916 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:59.916 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76940' 00:22:59.917 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76940 00:22:59.917 09:19:54 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76940 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:04.132 09:19:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:23:04.390 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:23:04.390 fio-3.35 00:23:04.390 Starting 1 thread 00:23:10.951 00:23:10.951 test: (groupid=0, jobs=1): err= 0: pid=77148: Wed Nov 20 09:20:04 2024 00:23:10.951 read: IOPS=905, BW=60.1MiB/s (63.1MB/s)(255MiB/4233msec) 00:23:10.951 slat (nsec): min=5066, max=37426, avg=6950.63, stdev=3433.67 00:23:10.951 clat (usec): min=344, max=1825, avg=493.04, stdev=57.73 00:23:10.951 lat (usec): min=350, max=1831, avg=499.99, stdev=58.42 00:23:10.951 clat percentiles (usec): 00:23:10.951 | 1.00th=[ 400], 5.00th=[ 429], 10.00th=[ 441], 20.00th=[ 453], 00:23:10.951 | 30.00th=[ 461], 40.00th=[ 469], 50.00th=[ 482], 60.00th=[ 494], 00:23:10.951 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 594], 00:23:10.951 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 898], 99.95th=[ 971], 00:23:10.951 | 99.99th=[ 1827] 00:23:10.951 write: IOPS=911, BW=60.5MiB/s (63.5MB/s)(256MiB/4229msec); 0 zone resets 00:23:10.951 slat (nsec): min=17215, max=77975, avg=22967.26, stdev=6470.59 00:23:10.951 clat (usec): min=384, max=8098, avg=565.02, stdev=139.00 00:23:10.951 lat (usec): min=404, max=8121, avg=587.99, stdev=139.48 00:23:10.951 clat percentiles (usec): 00:23:10.951 | 1.00th=[ 453], 5.00th=[ 486], 10.00th=[ 506], 20.00th=[ 519], 00:23:10.951 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 562], 00:23:10.951 | 70.00th=[ 578], 80.00th=[ 603], 90.00th=[ 635], 95.00th=[ 660], 00:23:10.951 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 979], 99.95th=[ 1991], 00:23:10.951 | 99.99th=[ 8094] 00:23:10.951 bw ( KiB/s): min=58072, max=63512, per=99.69%, avg=61812.00, stdev=1662.48, samples=8 00:23:10.951 iops : min= 854, max= 934, avg=909.00, stdev=24.45, samples=8 00:23:10.951 lat (usec) : 500=36.34%, 750=62.54%, 1000=1.07% 00:23:10.951 
lat (msec) : 2=0.04%, 10=0.01% 00:23:10.951 cpu : usr=99.10%, sys=0.19%, ctx=9, majf=0, minf=1169 00:23:10.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:10.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:10.951 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:10.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:10.951 00:23:10.951 Run status group 0 (all jobs): 00:23:10.951 READ: bw=60.1MiB/s (63.1MB/s), 60.1MiB/s-60.1MiB/s (63.1MB/s-63.1MB/s), io=255MiB (267MB), run=4233-4233msec 00:23:10.951 WRITE: bw=60.5MiB/s (63.5MB/s), 60.5MiB/s-60.5MiB/s (63.5MB/s-63.5MB/s), io=256MiB (269MB), run=4229-4229msec 00:23:11.209 ----------------------------------------------------- 00:23:11.209 Suppressions used: 00:23:11.209 count bytes template 00:23:11.209 1 5 /usr/src/fio/parse.c 00:23:11.209 1 8 libtcmalloc_minimal.so 00:23:11.209 1 904 libcrypto.so 00:23:11.209 ----------------------------------------------------- 00:23:11.209 00:23:11.467 09:20:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:23:11.467 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.467 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:11.467 09:20:06 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:11.467 09:20:06 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:23:11.467 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.467 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:11.467 09:20:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:11.467 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 
-- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:11.468 09:20:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:23:11.726 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:11.726 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:11.726 fio-3.35 00:23:11.726 Starting 2 threads 00:23:43.799 00:23:43.799 first_half: (groupid=0, jobs=1): err= 0: pid=77251: Wed Nov 20 09:20:37 2024 00:23:43.799 read: IOPS=2251, BW=9007KiB/s (9223kB/s)(255MiB/29040msec) 00:23:43.799 slat (usec): min=4, max=381, avg= 8.52, stdev= 3.44 00:23:43.799 clat (usec): min=1046, max=316350, avg=45115.35, stdev=22776.06 00:23:43.799 lat (usec): min=1054, max=316356, avg=45123.87, stdev=22776.35 00:23:43.799 clat percentiles (msec): 00:23:43.799 | 1.00th=[ 23], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:23:43.799 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 42], 00:23:43.799 | 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 51], 95.00th=[ 59], 00:23:43.799 | 99.00th=[ 178], 99.50th=[ 203], 99.90th=[ 275], 99.95th=[ 292], 00:23:43.799 | 99.99th=[ 309] 00:23:43.799 write: IOPS=2404, BW=9619KiB/s (9850kB/s)(256MiB/27252msec); 0 zone resets 00:23:43.799 slat (usec): min=4, max=277, avg=10.08, stdev= 6.82 00:23:43.799 clat (usec): min=463, max=108310, avg=11659.38, stdev=19605.40 00:23:43.799 lat (usec): min=483, max=108321, avg=11669.46, stdev=19605.89 00:23:43.799 clat percentiles (usec): 00:23:43.799 | 1.00th=[ 1057], 5.00th=[ 1385], 10.00th=[ 1680], 20.00th=[ 2769], 00:23:43.799 | 30.00th=[ 4113], 40.00th=[ 5604], 50.00th=[ 6456], 60.00th=[ 7308], 00:23:43.799 | 70.00th=[ 8160], 80.00th=[ 11338], 90.00th=[ 15926], 95.00th=[ 79168], 00:23:43.799 | 99.00th=[ 95945], 99.50th=[ 99091], 99.90th=[105382], 99.95th=[105382], 00:23:43.799 | 99.99th=[107480] 00:23:43.799 bw ( KiB/s): min= 408, max=41720, per=100.00%, avg=20164.92, stdev=12897.96, samples=26 00:23:43.799 iops : min= 102, max=10430, avg=5041.23, stdev=3224.49, samples=26 00:23:43.799 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.28% 00:23:43.799 lat (msec) : 2=7.09%, 4=7.29%, 10=24.62%, 20=7.34%, 50=45.23% 00:23:43.799 lat (msec) : 100=6.55%, 250=1.45%, 500=0.11% 00:23:43.799 cpu : usr=98.77%, sys=0.33%, ctx=62, majf=0, minf=5607 00:23:43.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:43.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.799 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:43.799 issued rwts: total=65392,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:43.799 second_half: (groupid=0, jobs=1): err= 0: pid=77252: Wed Nov 20 09:20:37 2024 00:23:43.799 read: IOPS=2233, BW=8935KiB/s (9149kB/s)(255MiB/29211msec) 00:23:43.799 slat (nsec): min=4219, max=80428, avg=7543.29, stdev=2508.34 00:23:43.799 clat (usec): min=871, max=357845, avg=44616.85, stdev=23650.16 00:23:43.799 lat (usec): min=883, max=357851, avg=44624.40, stdev=23650.40 00:23:43.799 clat percentiles (msec): 00:23:43.799 | 1.00th=[ 11], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 39], 00:23:43.799 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 42], 00:23:43.799 | 70.00th=[ 43], 80.00th=[ 45], 90.00th=[ 48], 95.00th=[ 70], 
00:23:43.799 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 234], 99.95th=[ 317], 00:23:43.799 | 99.99th=[ 351] 00:23:43.799 write: IOPS=2676, BW=10.5MiB/s (11.0MB/s)(256MiB/24490msec); 0 zone resets 00:23:43.799 slat (usec): min=4, max=204, avg= 9.67, stdev= 5.85 00:23:43.799 clat (usec): min=507, max=109092, avg=12604.56, stdev=20837.72 00:23:43.799 lat (usec): min=516, max=109099, avg=12614.22, stdev=20837.82 00:23:43.799 clat percentiles (usec): 00:23:43.799 | 1.00th=[ 996], 5.00th=[ 1270], 10.00th=[ 1434], 20.00th=[ 1696], 00:23:43.799 | 30.00th=[ 1991], 40.00th=[ 3687], 50.00th=[ 5735], 60.00th=[ 7308], 00:23:43.799 | 70.00th=[ 9765], 80.00th=[ 13566], 90.00th=[ 36439], 95.00th=[ 78119], 00:23:43.799 | 99.00th=[ 95945], 99.50th=[100140], 99.90th=[106431], 99.95th=[107480], 00:23:43.799 | 99.99th=[108528] 00:23:43.799 bw ( KiB/s): min= 272, max=45880, per=100.00%, avg=19417.70, stdev=11029.96, samples=27 00:23:43.799 iops : min= 68, max=11470, avg=4854.41, stdev=2757.52, samples=27 00:23:43.799 lat (usec) : 750=0.04%, 1000=0.48% 00:23:43.799 lat (msec) : 2=14.77%, 4=5.65%, 10=14.84%, 20=10.03%, 50=46.62% 00:23:43.799 lat (msec) : 100=5.70%, 250=1.81%, 500=0.05% 00:23:43.799 cpu : usr=98.85%, sys=0.36%, ctx=51, majf=0, minf=5510 00:23:43.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:43.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.799 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:43.799 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:43.799 00:23:43.799 Run status group 0 (all jobs): 00:23:43.799 READ: bw=17.5MiB/s (18.3MB/s), 8935KiB/s-9007KiB/s (9149kB/s-9223kB/s), io=510MiB (535MB), run=29040-29211msec 00:23:43.799 WRITE: bw=18.8MiB/s (19.7MB/s), 9619KiB/s-10.5MiB/s (9850kB/s-11.0MB/s), io=512MiB (537MB), run=24490-27252msec 00:23:44.734 ----------------------------------------------------- 00:23:44.734 Suppressions used: 00:23:44.734 count bytes template 00:23:44.734 2 10 /usr/src/fio/parse.c 00:23:44.734 4 384 /usr/src/fio/iolog.c 00:23:44.734 1 8 libtcmalloc_minimal.so 00:23:44.734 1 904 libcrypto.so 00:23:44.734 ----------------------------------------------------- 00:23:44.734 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:44.734 
09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:44.734 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:23:44.735 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:44.735 09:20:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:23:44.993 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:23:44.993 fio-3.35 00:23:44.993 Starting 1 thread 00:24:03.122 00:24:03.122 test: (groupid=0, jobs=1): err= 0: pid=77610: Wed Nov 20 09:20:57 2024 00:24:03.122 read: IOPS=6327, BW=24.7MiB/s (25.9MB/s)(255MiB/10304msec) 00:24:03.122 slat (nsec): min=4346, max=51651, avg=7076.33, stdev=2300.99 00:24:03.122 clat (usec): min=893, max=41453, avg=20216.53, stdev=1819.48 00:24:03.122 lat (usec): min=898, max=41461, avg=20223.61, stdev=1819.56 00:24:03.122 clat percentiles (usec): 00:24:03.122 | 1.00th=[18744], 5.00th=[19006], 10.00th=[19268], 20.00th=[19530], 00:24:03.122 | 30.00th=[19530], 40.00th=[19792], 50.00th=[19792], 60.00th=[20055], 00:24:03.122 | 70.00th=[20317], 80.00th=[20579], 90.00th=[21103], 95.00th=[22414], 00:24:03.122 | 99.00th=[24249], 99.50th=[36963], 99.90th=[40109], 99.95th=[40633], 00:24:03.122 | 99.99th=[41157] 00:24:03.122 write: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(256MiB/5889msec); 0 zone resets 00:24:03.122 slat (usec): min=5, max=776, avg=10.19, stdev= 7.99 00:24:03.122 clat (usec): min=660, max=73285, avg=11438.86, stdev=14037.44 00:24:03.122 lat (usec): min=668, max=73294, avg=11449.05, stdev=14037.51 00:24:03.122 clat percentiles (usec): 00:24:03.122 | 1.00th=[ 955], 5.00th=[ 1156], 10.00th=[ 1303], 20.00th=[ 1500], 00:24:03.122 | 30.00th=[ 1713], 40.00th=[ 2376], 50.00th=[ 7635], 60.00th=[ 8717], 00:24:03.122 | 70.00th=[10159], 80.00th=[13173], 90.00th=[40109], 95.00th=[43779], 00:24:03.122 | 99.00th=[50070], 99.50th=[52167], 99.90th=[56361], 99.95th=[61080], 00:24:03.122 | 99.99th=[70779] 00:24:03.122 bw ( KiB/s): min=31880, max=62291, per=98.13%, avg=43680.25, stdev=9085.61, samples=12 00:24:03.122 iops : min= 7970, max=15572, avg=10920.00, stdev=2271.26, samples=12 00:24:03.122 lat (usec) : 750=0.01%, 1000=0.78% 00:24:03.122 lat (msec) : 2=17.83%, 4=2.18%, 10=13.84%, 20=35.61%, 50=29.21% 00:24:03.122 lat (msec) : 100=0.54% 00:24:03.122 cpu : usr=98.70%, sys=0.51%, ctx=25, majf=0, 
minf=5565 00:24:03.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:03.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.122 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:03.122 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.122 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:03.122 00:24:03.122 Run status group 0 (all jobs): 00:24:03.122 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=255MiB (267MB), run=10304-10304msec 00:24:03.122 WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=256MiB (268MB), run=5889-5889msec 00:24:04.497 ----------------------------------------------------- 00:24:04.497 Suppressions used: 00:24:04.497 count bytes template 00:24:04.497 1 5 /usr/src/fio/parse.c 00:24:04.497 2 192 /usr/src/fio/iolog.c 00:24:04.497 1 8 libtcmalloc_minimal.so 00:24:04.497 1 904 libcrypto.so 00:24:04.497 ----------------------------------------------------- 00:24:04.497 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:04.497 Remove shared memory files 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57970 /dev/shm/spdk_tgt_trace.pid75847 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:24:04.497 ************************************ 00:24:04.497 END TEST ftl_fio_basic 00:24:04.497 ************************************ 00:24:04.497 00:24:04.497 real 1m14.342s 00:24:04.497 user 2m44.772s 00:24:04.497 sys 0m4.207s 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.497 09:20:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:24:04.497 09:20:59 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:04.497 09:20:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:04.497 09:20:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.497 09:20:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:04.497 ************************************ 00:24:04.497 START TEST ftl_bdevperf 00:24:04.497 ************************************ 00:24:04.497 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:24:04.497 * Looking for test storage... 
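The xtrace above shows the mechanism fio_bdev used for every job in this suite: it resolves the ASAN runtime the SPDK fio plugin was linked against and preloads it ahead of the plugin, so the sanitizer initializes before any instrumented code runs. A minimal standalone sketch of that same pattern, using the plugin and job paths from this run:

  # ldd prints "libasan.so.N => /path/to/libasan.so.N (addr)"; field 3 is the path.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  # The sanitizer runtime must come before the plugin in LD_PRELOAD.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio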
00:24:04.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:04.756 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:04.756 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:04.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.757 --rc genhtml_branch_coverage=1 00:24:04.757 --rc genhtml_function_coverage=1 00:24:04.757 --rc genhtml_legend=1 00:24:04.757 --rc geninfo_all_blocks=1 00:24:04.757 --rc geninfo_unexecuted_blocks=1 00:24:04.757 00:24:04.757 ' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:04.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.757 --rc genhtml_branch_coverage=1 00:24:04.757 
--rc genhtml_function_coverage=1 00:24:04.757 --rc genhtml_legend=1 00:24:04.757 --rc geninfo_all_blocks=1 00:24:04.757 --rc geninfo_unexecuted_blocks=1 00:24:04.757 00:24:04.757 ' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:04.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.757 --rc genhtml_branch_coverage=1 00:24:04.757 --rc genhtml_function_coverage=1 00:24:04.757 --rc genhtml_legend=1 00:24:04.757 --rc geninfo_all_blocks=1 00:24:04.757 --rc geninfo_unexecuted_blocks=1 00:24:04.757 00:24:04.757 ' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:04.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.757 --rc genhtml_branch_coverage=1 00:24:04.757 --rc genhtml_function_coverage=1 00:24:04.757 --rc genhtml_legend=1 00:24:04.757 --rc geninfo_all_blocks=1 00:24:04.757 --rc geninfo_unexecuted_blocks=1 00:24:04.757 00:24:04.757 ' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77874 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77874 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77874 ']' 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.757 09:20:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:05.015 [2024-11-20 09:20:59.876806] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
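Here bdevperf is launched with -z, which brings the application up and then idles until a perform_tests RPC arrives, while -T ftl0 names the bdev the workload will target; waitforlisten then blocks until the RPC socket answers. The startup banner and EAL parameter lines that follow are printed by this process. A condensed sketch of the launch, assuming the default /var/tmp/spdk.sock socket:

  # -z: initialize, then wait for the perform_tests RPC; -T: bdev to exercise.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!

  # Poll the RPC socket until the app is up (roughly what waitforlisten does).
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done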
00:24:05.015 [2024-11-20 09:20:59.877226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77874 ] 00:24:05.015 [2024-11-20 09:21:00.067023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.274 [2024-11-20 09:21:00.223032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.840 09:21:00 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.840 09:21:00 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:05.840 09:21:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:05.840 09:21:00 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:24:05.840 09:21:00 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:05.840 09:21:00 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:24:05.840 09:21:00 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:24:05.840 09:21:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:06.407 09:21:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:06.407 09:21:01 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:24:06.407 09:21:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:06.408 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:06.408 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:06.408 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:06.408 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:06.408 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:06.666 { 00:24:06.666 "name": "nvme0n1", 00:24:06.666 "aliases": [ 00:24:06.666 "2ba8bbb7-391e-49cd-a9cd-929901c02efb" 00:24:06.666 ], 00:24:06.666 "product_name": "NVMe disk", 00:24:06.666 "block_size": 4096, 00:24:06.666 "num_blocks": 1310720, 00:24:06.666 "uuid": "2ba8bbb7-391e-49cd-a9cd-929901c02efb", 00:24:06.666 "numa_id": -1, 00:24:06.666 "assigned_rate_limits": { 00:24:06.666 "rw_ios_per_sec": 0, 00:24:06.666 "rw_mbytes_per_sec": 0, 00:24:06.666 "r_mbytes_per_sec": 0, 00:24:06.666 "w_mbytes_per_sec": 0 00:24:06.666 }, 00:24:06.666 "claimed": true, 00:24:06.666 "claim_type": "read_many_write_one", 00:24:06.666 "zoned": false, 00:24:06.666 "supported_io_types": { 00:24:06.666 "read": true, 00:24:06.666 "write": true, 00:24:06.666 "unmap": true, 00:24:06.666 "flush": true, 00:24:06.666 "reset": true, 00:24:06.666 "nvme_admin": true, 00:24:06.666 "nvme_io": true, 00:24:06.666 "nvme_io_md": false, 00:24:06.666 "write_zeroes": true, 00:24:06.666 "zcopy": false, 00:24:06.666 "get_zone_info": false, 00:24:06.666 "zone_management": false, 00:24:06.666 "zone_append": false, 00:24:06.666 "compare": true, 00:24:06.666 "compare_and_write": false, 00:24:06.666 "abort": true, 00:24:06.666 "seek_hole": false, 00:24:06.666 "seek_data": false, 00:24:06.666 "copy": true, 00:24:06.666 "nvme_iov_md": false 00:24:06.666 }, 00:24:06.666 "driver_specific": { 00:24:06.666 
"nvme": [ 00:24:06.666 { 00:24:06.666 "pci_address": "0000:00:11.0", 00:24:06.666 "trid": { 00:24:06.666 "trtype": "PCIe", 00:24:06.666 "traddr": "0000:00:11.0" 00:24:06.666 }, 00:24:06.666 "ctrlr_data": { 00:24:06.666 "cntlid": 0, 00:24:06.666 "vendor_id": "0x1b36", 00:24:06.666 "model_number": "QEMU NVMe Ctrl", 00:24:06.666 "serial_number": "12341", 00:24:06.666 "firmware_revision": "8.0.0", 00:24:06.666 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:06.666 "oacs": { 00:24:06.666 "security": 0, 00:24:06.666 "format": 1, 00:24:06.666 "firmware": 0, 00:24:06.666 "ns_manage": 1 00:24:06.666 }, 00:24:06.666 "multi_ctrlr": false, 00:24:06.666 "ana_reporting": false 00:24:06.666 }, 00:24:06.666 "vs": { 00:24:06.666 "nvme_version": "1.4" 00:24:06.666 }, 00:24:06.666 "ns_data": { 00:24:06.666 "id": 1, 00:24:06.666 "can_share": false 00:24:06.666 } 00:24:06.666 } 00:24:06.666 ], 00:24:06.666 "mp_policy": "active_passive" 00:24:06.666 } 00:24:06.666 } 00:24:06.666 ]' 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:06.666 09:21:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:07.233 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=08bbcb87-6c46-4ed0-83ee-7b8f5d9be8b2 00:24:07.233 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:24:07.233 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08bbcb87-6c46-4ed0-83ee-7b8f5d9be8b2 00:24:07.233 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:07.800 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=2e3ca1d9-b1a3-4a64-8892-a730d2dc4e27 00:24:07.800 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2e3ca1d9-b1a3-4a64-8892-a730d2dc4e27 00:24:07.800 09:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:07.800 09:21:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:07.800 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:24:07.800 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:07.800 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:07.800 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:24:08.059 09:21:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:08.059 09:21:02 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:08.059 09:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:08.059 09:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:08.059 09:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:08.059 09:21:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:08.317 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:08.317 { 00:24:08.317 "name": "a7d51db4-9118-495c-a1ff-5d21bf2f6c08", 00:24:08.317 "aliases": [ 00:24:08.317 "lvs/nvme0n1p0" 00:24:08.317 ], 00:24:08.317 "product_name": "Logical Volume", 00:24:08.317 "block_size": 4096, 00:24:08.317 "num_blocks": 26476544, 00:24:08.317 "uuid": "a7d51db4-9118-495c-a1ff-5d21bf2f6c08", 00:24:08.317 "assigned_rate_limits": { 00:24:08.317 "rw_ios_per_sec": 0, 00:24:08.317 "rw_mbytes_per_sec": 0, 00:24:08.317 "r_mbytes_per_sec": 0, 00:24:08.317 "w_mbytes_per_sec": 0 00:24:08.317 }, 00:24:08.317 "claimed": false, 00:24:08.318 "zoned": false, 00:24:08.318 "supported_io_types": { 00:24:08.318 "read": true, 00:24:08.318 "write": true, 00:24:08.318 "unmap": true, 00:24:08.318 "flush": false, 00:24:08.318 "reset": true, 00:24:08.318 "nvme_admin": false, 00:24:08.318 "nvme_io": false, 00:24:08.318 "nvme_io_md": false, 00:24:08.318 "write_zeroes": true, 00:24:08.318 "zcopy": false, 00:24:08.318 "get_zone_info": false, 00:24:08.318 "zone_management": false, 00:24:08.318 "zone_append": false, 00:24:08.318 "compare": false, 00:24:08.318 "compare_and_write": false, 00:24:08.318 "abort": false, 00:24:08.318 "seek_hole": true, 00:24:08.318 "seek_data": true, 00:24:08.318 "copy": false, 00:24:08.318 "nvme_iov_md": false 00:24:08.318 }, 00:24:08.318 "driver_specific": { 00:24:08.318 "lvol": { 00:24:08.318 "lvol_store_uuid": "2e3ca1d9-b1a3-4a64-8892-a730d2dc4e27", 00:24:08.318 "base_bdev": "nvme0n1", 00:24:08.318 "thin_provision": true, 00:24:08.318 "num_allocated_clusters": 0, 00:24:08.318 "snapshot": false, 00:24:08.318 "clone": false, 00:24:08.318 "esnap_clone": false 00:24:08.318 } 00:24:08.318 } 00:24:08.318 } 00:24:08.318 ]' 00:24:08.318 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:08.318 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:08.318 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:08.318 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:08.318 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:08.318 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:08.318 09:21:03 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:24:08.318 09:21:03 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:24:08.318 09:21:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:08.931 09:21:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:08.931 09:21:03 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:08.931 09:21:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:08.931 09:21:03 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:08.931 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:08.931 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:24:08.931 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:08.931 09:21:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:08.931 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:08.931 { 00:24:08.931 "name": "a7d51db4-9118-495c-a1ff-5d21bf2f6c08", 00:24:08.931 "aliases": [ 00:24:08.931 "lvs/nvme0n1p0" 00:24:08.931 ], 00:24:08.931 "product_name": "Logical Volume", 00:24:08.931 "block_size": 4096, 00:24:08.931 "num_blocks": 26476544, 00:24:08.931 "uuid": "a7d51db4-9118-495c-a1ff-5d21bf2f6c08", 00:24:08.931 "assigned_rate_limits": { 00:24:08.931 "rw_ios_per_sec": 0, 00:24:08.931 "rw_mbytes_per_sec": 0, 00:24:08.931 "r_mbytes_per_sec": 0, 00:24:08.931 "w_mbytes_per_sec": 0 00:24:08.931 }, 00:24:08.931 "claimed": false, 00:24:08.931 "zoned": false, 00:24:08.931 "supported_io_types": { 00:24:08.931 "read": true, 00:24:08.931 "write": true, 00:24:08.931 "unmap": true, 00:24:08.931 "flush": false, 00:24:08.931 "reset": true, 00:24:08.931 "nvme_admin": false, 00:24:08.931 "nvme_io": false, 00:24:08.931 "nvme_io_md": false, 00:24:08.931 "write_zeroes": true, 00:24:08.931 "zcopy": false, 00:24:08.931 "get_zone_info": false, 00:24:08.931 "zone_management": false, 00:24:08.931 "zone_append": false, 00:24:08.931 "compare": false, 00:24:08.931 "compare_and_write": false, 00:24:08.931 "abort": false, 00:24:08.931 "seek_hole": true, 00:24:08.931 "seek_data": true, 00:24:08.931 "copy": false, 00:24:08.931 "nvme_iov_md": false 00:24:08.931 }, 00:24:08.931 "driver_specific": { 00:24:08.931 "lvol": { 00:24:08.931 "lvol_store_uuid": "2e3ca1d9-b1a3-4a64-8892-a730d2dc4e27", 00:24:08.931 "base_bdev": "nvme0n1", 00:24:08.931 "thin_provision": true, 00:24:08.931 "num_allocated_clusters": 0, 00:24:08.931 "snapshot": false, 00:24:08.931 "clone": false, 00:24:08.931 "esnap_clone": false 00:24:08.931 } 00:24:08.931 } 00:24:08.931 } 00:24:08.931 ]' 00:24:08.931 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:09.189 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:09.189 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:09.189 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:09.189 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:09.189 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:09.189 09:21:04 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:24:09.189 09:21:04 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:09.447 09:21:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:24:09.447 09:21:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:09.447 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:09.447 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:09.447 09:21:04 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:24:09.447 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:24:09.447 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a7d51db4-9118-495c-a1ff-5d21bf2f6c08 00:24:10.014 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:10.014 { 00:24:10.014 "name": "a7d51db4-9118-495c-a1ff-5d21bf2f6c08", 00:24:10.014 "aliases": [ 00:24:10.014 "lvs/nvme0n1p0" 00:24:10.014 ], 00:24:10.014 "product_name": "Logical Volume", 00:24:10.014 "block_size": 4096, 00:24:10.014 "num_blocks": 26476544, 00:24:10.014 "uuid": "a7d51db4-9118-495c-a1ff-5d21bf2f6c08", 00:24:10.014 "assigned_rate_limits": { 00:24:10.014 "rw_ios_per_sec": 0, 00:24:10.014 "rw_mbytes_per_sec": 0, 00:24:10.014 "r_mbytes_per_sec": 0, 00:24:10.014 "w_mbytes_per_sec": 0 00:24:10.014 }, 00:24:10.014 "claimed": false, 00:24:10.014 "zoned": false, 00:24:10.014 "supported_io_types": { 00:24:10.014 "read": true, 00:24:10.014 "write": true, 00:24:10.014 "unmap": true, 00:24:10.014 "flush": false, 00:24:10.014 "reset": true, 00:24:10.014 "nvme_admin": false, 00:24:10.014 "nvme_io": false, 00:24:10.014 "nvme_io_md": false, 00:24:10.014 "write_zeroes": true, 00:24:10.014 "zcopy": false, 00:24:10.014 "get_zone_info": false, 00:24:10.014 "zone_management": false, 00:24:10.014 "zone_append": false, 00:24:10.014 "compare": false, 00:24:10.014 "compare_and_write": false, 00:24:10.014 "abort": false, 00:24:10.014 "seek_hole": true, 00:24:10.014 "seek_data": true, 00:24:10.014 "copy": false, 00:24:10.014 "nvme_iov_md": false 00:24:10.014 }, 00:24:10.014 "driver_specific": { 00:24:10.014 "lvol": { 00:24:10.014 "lvol_store_uuid": "2e3ca1d9-b1a3-4a64-8892-a730d2dc4e27", 00:24:10.014 "base_bdev": "nvme0n1", 00:24:10.014 "thin_provision": true, 00:24:10.014 "num_allocated_clusters": 0, 00:24:10.014 "snapshot": false, 00:24:10.014 "clone": false, 00:24:10.014 "esnap_clone": false 00:24:10.014 } 00:24:10.014 } 00:24:10.014 } 00:24:10.014 ]' 00:24:10.014 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:10.014 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:24:10.014 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:10.014 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:10.014 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:10.014 09:21:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:24:10.014 09:21:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:24:10.014 09:21:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a7d51db4-9118-495c-a1ff-5d21bf2f6c08 -c nvc0n1p0 --l2p_dram_limit 20 00:24:10.273 [2024-11-20 09:21:05.257762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.273 [2024-11-20 09:21:05.257845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:10.273 [2024-11-20 09:21:05.257869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:10.273 [2024-11-20 09:21:05.257886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.273 [2024-11-20 09:21:05.257980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.273 [2024-11-20 09:21:05.258006] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.273 [2024-11-20 09:21:05.258021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:10.273 [2024-11-20 09:21:05.258036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.273 [2024-11-20 09:21:05.258066] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:10.273 [2024-11-20 09:21:05.259188] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:10.273 [2024-11-20 09:21:05.259226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.273 [2024-11-20 09:21:05.259244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.273 [2024-11-20 09:21:05.259258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:24:10.273 [2024-11-20 09:21:05.259272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.273 [2024-11-20 09:21:05.259408] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 686df8b0-28c2-4a9b-82f9-c6b8e367751f 00:24:10.273 [2024-11-20 09:21:05.261331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.273 [2024-11-20 09:21:05.261537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:10.273 [2024-11-20 09:21:05.261573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:10.273 [2024-11-20 09:21:05.261592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.273 [2024-11-20 09:21:05.271492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.273 [2024-11-20 09:21:05.271565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.273 [2024-11-20 09:21:05.271591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.782 ms 00:24:10.273 [2024-11-20 09:21:05.271605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.273 [2024-11-20 09:21:05.271820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.273 [2024-11-20 09:21:05.271848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.273 [2024-11-20 09:21:05.271872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:24:10.273 [2024-11-20 09:21:05.271885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.273 [2024-11-20 09:21:05.272006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.273 [2024-11-20 09:21:05.272027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:10.273 [2024-11-20 09:21:05.272043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:10.273 [2024-11-20 09:21:05.272057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.273 [2024-11-20 09:21:05.272095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:10.273 [2024-11-20 09:21:05.277456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.273 [2024-11-20 09:21:05.277512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.273 [2024-11-20 09:21:05.277530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.374 ms 00:24:10.273 [2024-11-20 09:21:05.277559] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.273 [2024-11-20 09:21:05.277616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.273 [2024-11-20 09:21:05.277636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:10.273 [2024-11-20 09:21:05.277664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:10.273 [2024-11-20 09:21:05.277681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.273 [2024-11-20 09:21:05.277740] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:10.273 [2024-11-20 09:21:05.277915] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:10.273 [2024-11-20 09:21:05.277935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:10.273 [2024-11-20 09:21:05.277955] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:10.273 [2024-11-20 09:21:05.277971] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:10.273 [2024-11-20 09:21:05.277989] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:10.273 [2024-11-20 09:21:05.278008] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:10.274 [2024-11-20 09:21:05.278024] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:10.274 [2024-11-20 09:21:05.278036] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:10.274 [2024-11-20 09:21:05.278050] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:10.274 [2024-11-20 09:21:05.278064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.274 [2024-11-20 09:21:05.278082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:10.274 [2024-11-20 09:21:05.278096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:24:10.274 [2024-11-20 09:21:05.278112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.274 [2024-11-20 09:21:05.278220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.274 [2024-11-20 09:21:05.278243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:10.274 [2024-11-20 09:21:05.278258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:10.274 [2024-11-20 09:21:05.278276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.274 [2024-11-20 09:21:05.278380] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:10.274 [2024-11-20 09:21:05.278401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:10.274 [2024-11-20 09:21:05.278417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.274 [2024-11-20 09:21:05.278433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:10.274 [2024-11-20 09:21:05.278460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:10.274 
[2024-11-20 09:21:05.278486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:10.274 [2024-11-20 09:21:05.278498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.274 [2024-11-20 09:21:05.278524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:10.274 [2024-11-20 09:21:05.278540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:10.274 [2024-11-20 09:21:05.278551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.274 [2024-11-20 09:21:05.278584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:10.274 [2024-11-20 09:21:05.278597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:10.274 [2024-11-20 09:21:05.278614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:10.274 [2024-11-20 09:21:05.278641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:10.274 [2024-11-20 09:21:05.278681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:10.274 [2024-11-20 09:21:05.278720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.274 [2024-11-20 09:21:05.278747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:10.274 [2024-11-20 09:21:05.278761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.274 [2024-11-20 09:21:05.278788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:10.274 [2024-11-20 09:21:05.278800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.274 [2024-11-20 09:21:05.278826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:10.274 [2024-11-20 09:21:05.278840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:10.274 [2024-11-20 09:21:05.278876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:10.274 [2024-11-20 09:21:05.278888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.274 [2024-11-20 09:21:05.278913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:10.274 [2024-11-20 09:21:05.278927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:10.274 [2024-11-20 09:21:05.278938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.274 [2024-11-20 09:21:05.278952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:10.274 [2024-11-20 09:21:05.278963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:10.274 [2024-11-20 09:21:05.278977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.274 [2024-11-20 09:21:05.278989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:10.274 [2024-11-20 09:21:05.279003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:10.274 [2024-11-20 09:21:05.279014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.274 [2024-11-20 09:21:05.279028] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:10.274 [2024-11-20 09:21:05.279041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:10.274 [2024-11-20 09:21:05.279055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.274 [2024-11-20 09:21:05.279067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.274 [2024-11-20 09:21:05.279092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:10.274 [2024-11-20 09:21:05.279105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:10.274 [2024-11-20 09:21:05.279119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:10.274 [2024-11-20 09:21:05.279131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:10.274 [2024-11-20 09:21:05.279145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:10.274 [2024-11-20 09:21:05.279156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:10.274 [2024-11-20 09:21:05.279175] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:10.274 [2024-11-20 09:21:05.279191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.274 [2024-11-20 09:21:05.279207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:10.274 [2024-11-20 09:21:05.279226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:10.274 [2024-11-20 09:21:05.279242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:10.274 [2024-11-20 09:21:05.279255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:10.274 [2024-11-20 09:21:05.279269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:10.274 [2024-11-20 09:21:05.279282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:10.274 [2024-11-20 09:21:05.279296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:10.274 [2024-11-20 09:21:05.279309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:10.274 [2024-11-20 09:21:05.279326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:10.274 [2024-11-20 09:21:05.279338] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:10.274 [2024-11-20 09:21:05.279353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:10.274 [2024-11-20 09:21:05.279365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:10.274 [2024-11-20 09:21:05.279380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:10.274 [2024-11-20 09:21:05.279394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:10.274 [2024-11-20 09:21:05.279408] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:10.274 [2024-11-20 09:21:05.279423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:10.274 [2024-11-20 09:21:05.279440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:10.274 [2024-11-20 09:21:05.279454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:10.274 [2024-11-20 09:21:05.279470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:10.274 [2024-11-20 09:21:05.279482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:10.274 [2024-11-20 09:21:05.279498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.274 [2024-11-20 09:21:05.279515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:10.274 [2024-11-20 09:21:05.279530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:24:10.274 [2024-11-20 09:21:05.279542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.274 [2024-11-20 09:21:05.279600] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
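The scrub announced here happens on first-time FTL startup and is the long pole in bringing the bdev online, which is why the create call was issued with a 240-second RPC timeout. For reference, the RPC sequence the helper functions issued above to assemble this stack, condensed into direct calls (UUIDs and sizes are the ones from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore on the base bdev
  $rpc bdev_lvol_create nvme0n1p0 103424 -t \
      -u 2e3ca1d9-b1a3-4a64-8892-a730d2dc4e27                         # 103424 MiB thin lvol
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB NV-cache slice
  $rpc -t 240 bdev_ftl_create -b ftl0 \
      -d a7d51db4-9118-495c-a1ff-5d21bf2f6c08 \
      -c nvc0n1p0 --l2p_dram_limit 20   # cap resident L2P at 20 MiB (log: 19 of 20 MiB)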
00:24:10.274 [2024-11-20 09:21:05.279617] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:13.558 [2024-11-20 09:21:07.947249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:07.947343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:13.558 [2024-11-20 09:21:07.947405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2667.660 ms 00:24:13.558 [2024-11-20 09:21:07.947419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:07.988958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:07.989020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:13.558 [2024-11-20 09:21:07.989047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.227 ms 00:24:13.558 [2024-11-20 09:21:07.989076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:07.989271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:07.989291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:13.558 [2024-11-20 09:21:07.989325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:13.558 [2024-11-20 09:21:07.989352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.046241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.046315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:13.558 [2024-11-20 09:21:08.046343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.833 ms 00:24:13.558 [2024-11-20 09:21:08.046357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.046428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.046449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:13.558 [2024-11-20 09:21:08.046482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:13.558 [2024-11-20 09:21:08.046509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.047301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.047330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:13.558 [2024-11-20 09:21:08.047349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:24:13.558 [2024-11-20 09:21:08.047362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.047572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.047605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:13.558 [2024-11-20 09:21:08.047623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:24:13.558 [2024-11-20 09:21:08.047635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.068618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.068717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:13.558 [2024-11-20 
09:21:08.068741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.941 ms 00:24:13.558 [2024-11-20 09:21:08.068754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.083301] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:24:13.558 [2024-11-20 09:21:08.091600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.091700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:13.558 [2024-11-20 09:21:08.091723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.721 ms 00:24:13.558 [2024-11-20 09:21:08.091739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.172215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.172478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:13.558 [2024-11-20 09:21:08.172540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.419 ms 00:24:13.558 [2024-11-20 09:21:08.172558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.172820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.172855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:13.558 [2024-11-20 09:21:08.172885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:24:13.558 [2024-11-20 09:21:08.172915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.205263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.205327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:13.558 [2024-11-20 09:21:08.205345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.255 ms 00:24:13.558 [2024-11-20 09:21:08.205361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.236803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.236853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:13.558 [2024-11-20 09:21:08.236872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.382 ms 00:24:13.558 [2024-11-20 09:21:08.236892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.237796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.237831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:13.558 [2024-11-20 09:21:08.237847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:24:13.558 [2024-11-20 09:21:08.237862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.328265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.328352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:13.558 [2024-11-20 09:21:08.328375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.342 ms 00:24:13.558 [2024-11-20 09:21:08.328391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 
09:21:08.361054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.361122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:13.558 [2024-11-20 09:21:08.361142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.562 ms 00:24:13.558 [2024-11-20 09:21:08.361162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.391401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.391467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:13.558 [2024-11-20 09:21:08.391485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.192 ms 00:24:13.558 [2024-11-20 09:21:08.391500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.419873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.419922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:13.558 [2024-11-20 09:21:08.419939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.328 ms 00:24:13.558 [2024-11-20 09:21:08.419954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.420004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.420029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:13.558 [2024-11-20 09:21:08.420043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:13.558 [2024-11-20 09:21:08.420057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.420180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.558 [2024-11-20 09:21:08.420202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:13.558 [2024-11-20 09:21:08.420215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:13.558 [2024-11-20 09:21:08.420228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.558 [2024-11-20 09:21:08.421539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3163.242 ms, result 0 00:24:13.558 { 00:24:13.558 "name": "ftl0", 00:24:13.558 "uuid": "686df8b0-28c2-4a9b-82f9-c6b8e367751f" 00:24:13.558 } 00:24:13.558 09:21:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:24:13.558 09:21:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:24:13.558 09:21:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:24:13.816 09:21:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:24:13.817 [2024-11-20 09:21:08.878012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:13.817 I/O size of 69632 is greater than zero copy threshold (65536). 00:24:13.817 Zero copy mechanism will not be used. 00:24:13.817 Running I/O for 4 seconds... 
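For reference, the sequence driving this first run, with the paths exactly as they appear in the trace above (a sketch of the call order, not the full bdevperf.sh):

    # confirm the FTL bdev answers stats RPCs before driving I/O at it
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
    # queue depth 1, random writes, 4 s, 69632-byte (68 KiB) I/Os
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632

68 KiB exceeds bdevperf's 65,536-byte zero-copy threshold, hence the notice above that the zero copy mechanism will not be used for this run.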
00:24:16.126 1670.00 IOPS, 110.90 MiB/s [2024-11-20T09:21:12.178Z] 1725.00 IOPS, 114.55 MiB/s [2024-11-20T09:21:13.111Z] 1746.00 IOPS, 115.95 MiB/s [2024-11-20T09:21:13.111Z] 1737.00 IOPS, 115.35 MiB/s 00:24:17.991 Latency(us) 00:24:17.991 [2024-11-20T09:21:13.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.991 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:24:17.991 ftl0 : 4.00 1736.30 115.30 0.00 0.00 601.68 245.76 2651.23 00:24:17.991 [2024-11-20T09:21:13.111Z] =================================================================================================================== 00:24:17.991 [2024-11-20T09:21:13.111Z] Total : 1736.30 115.30 0.00 0.00 601.68 245.76 2651.23 00:24:17.991 [2024-11-20 09:21:12.891449] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:17.991 { 00:24:17.991 "results": [ 00:24:17.991 { 00:24:17.991 "job": "ftl0", 00:24:17.991 "core_mask": "0x1", 00:24:17.991 "workload": "randwrite", 00:24:17.991 "status": "finished", 00:24:17.991 "queue_depth": 1, 00:24:17.991 "io_size": 69632, 00:24:17.991 "runtime": 4.002199, 00:24:17.991 "iops": 1736.2954715645074, 00:24:17.991 "mibps": 115.30087115858056, 00:24:17.991 "io_failed": 0, 00:24:17.991 "io_timeout": 0, 00:24:17.991 "avg_latency_us": 601.6757895838512, 00:24:17.991 "min_latency_us": 245.76, 00:24:17.991 "max_latency_us": 2651.2290909090907 00:24:17.991 } 00:24:17.991 ], 00:24:17.991 "core_count": 1 00:24:17.991 } 00:24:17.991 09:21:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:24:17.991 [2024-11-20 09:21:13.027587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:17.991 Running I/O for 4 seconds... 
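The headline numbers in this first result set are internally consistent and can be re-derived by hand, since throughput is IOPS times I/O size:

    1736.30 IOPS * 69632 B = 120,902,041.6 B/s; 120,902,041.6 / 1,048,576 = 115.30 MiB/s

which matches the reported 115.30 MiB/s for the q=1, 68 KiB randwrite job.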
00:24:20.305 7360.00 IOPS, 28.75 MiB/s [2024-11-20T09:21:16.359Z] 7324.50 IOPS, 28.61 MiB/s [2024-11-20T09:21:17.295Z] 7473.33 IOPS, 29.19 MiB/s [2024-11-20T09:21:17.295Z] 7307.50 IOPS, 28.54 MiB/s 00:24:22.175 Latency(us) 00:24:22.175 [2024-11-20T09:21:17.295Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.175 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:24:22.175 ftl0 : 4.03 7292.61 28.49 0.00 0.00 17495.33 331.40 34078.72 00:24:22.175 [2024-11-20T09:21:17.295Z] =================================================================================================================== 00:24:22.175 [2024-11-20T09:21:17.295Z] Total : 7292.61 28.49 0.00 0.00 17495.33 0.00 34078.72 00:24:22.175 [2024-11-20 09:21:17.067287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:22.175 { 00:24:22.175 "results": [ 00:24:22.175 { 00:24:22.175 "job": "ftl0", 00:24:22.175 "core_mask": "0x1", 00:24:22.175 "workload": "randwrite", 00:24:22.175 "status": "finished", 00:24:22.175 "queue_depth": 128, 00:24:22.175 "io_size": 4096, 00:24:22.175 "runtime": 4.025719, 00:24:22.175 "iops": 7292.610338674905, 00:24:22.175 "mibps": 28.486759135448846, 00:24:22.175 "io_failed": 0, 00:24:22.175 "io_timeout": 0, 00:24:22.175 "avg_latency_us": 17495.333148530055, 00:24:22.175 "min_latency_us": 331.40363636363634, 00:24:22.175 "max_latency_us": 34078.72 00:24:22.175 } 00:24:22.175 ], 00:24:22.175 "core_count": 1 00:24:22.175 } 00:24:22.175 09:21:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:24:22.175 [2024-11-20 09:21:17.219562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:24:22.175 Running I/O for 4 seconds... 
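Each run also emits its results as a JSON blob (the "results" objects above). A quick way to pull the headline fields back out, sketched with the same jq already used in this run (results.json is a hypothetical capture of that blob, not a file the test itself writes):

    jq -r '.results[] | [.job, .iops, .mibps, .avg_latency_us] | @tsv' results.json

For the q=128 randwrite run above this would print ftl0, 7292.61..., 28.48..., and 17495.33... on one tab-separated line.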
00:24:24.494 5453.00 IOPS, 21.30 MiB/s [2024-11-20T09:21:20.548Z] 5448.50 IOPS, 21.28 MiB/s [2024-11-20T09:21:21.486Z] 5633.00 IOPS, 22.00 MiB/s [2024-11-20T09:21:21.486Z] 5706.75 IOPS, 22.29 MiB/s 00:24:26.366 Latency(us) 00:24:26.366 [2024-11-20T09:21:21.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.366 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:26.366 Verification LBA range: start 0x0 length 0x1400000 00:24:26.366 ftl0 : 4.01 5718.15 22.34 0.00 0.00 22304.99 374.23 29550.78 00:24:26.366 [2024-11-20T09:21:21.486Z] =================================================================================================================== 00:24:26.366 [2024-11-20T09:21:21.486Z] Total : 5718.15 22.34 0.00 0.00 22304.99 0.00 29550.78 00:24:26.366 { 00:24:26.366 "results": [ 00:24:26.366 { 00:24:26.366 "job": "ftl0", 00:24:26.366 "core_mask": "0x1", 00:24:26.366 "workload": "verify", 00:24:26.366 "status": "finished", 00:24:26.366 "verify_range": { 00:24:26.366 "start": 0, 00:24:26.366 "length": 20971520 00:24:26.366 }, 00:24:26.366 "queue_depth": 128, 00:24:26.366 "io_size": 4096, 00:24:26.366 "runtime": 4.013889, 00:24:26.366 "iops": 5718.145170431968, 00:24:26.366 "mibps": 22.336504571999875, 00:24:26.366 "io_failed": 0, 00:24:26.366 "io_timeout": 0, 00:24:26.366 "avg_latency_us": 22304.988948952756, 00:24:26.366 "min_latency_us": 374.22545454545457, 00:24:26.366 "max_latency_us": 29550.778181818183 00:24:26.366 } 00:24:26.366 ], 00:24:26.366 "core_count": 1 00:24:26.366 } 00:24:26.366 [2024-11-20 09:21:21.258761] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:24:26.366 09:21:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:24:26.624 [2024-11-20 09:21:21.565046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.624 [2024-11-20 09:21:21.565133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:26.624 [2024-11-20 09:21:21.565173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:26.624 [2024-11-20 09:21:21.565198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.624 [2024-11-20 09:21:21.565251] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:26.624 [2024-11-20 09:21:21.569149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.624 [2024-11-20 09:21:21.569196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:26.624 [2024-11-20 09:21:21.569231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.854 ms 00:24:26.624 [2024-11-20 09:21:21.569257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.624 [2024-11-20 09:21:21.570965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.624 [2024-11-20 09:21:21.571015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:26.624 [2024-11-20 09:21:21.571055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:24:26.624 [2024-11-20 09:21:21.571080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.755244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.884 [2024-11-20 09:21:21.755541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:24:26.884 [2024-11-20 09:21:21.755613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 184.095 ms 00:24:26.884 [2024-11-20 09:21:21.755641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.762474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.884 [2024-11-20 09:21:21.762672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:26.884 [2024-11-20 09:21:21.762730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.715 ms 00:24:26.884 [2024-11-20 09:21:21.762756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.795434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.884 [2024-11-20 09:21:21.795516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:26.884 [2024-11-20 09:21:21.795555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.480 ms 00:24:26.884 [2024-11-20 09:21:21.795577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.815781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.884 [2024-11-20 09:21:21.815852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:26.884 [2024-11-20 09:21:21.815898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.121 ms 00:24:26.884 [2024-11-20 09:21:21.815919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.816176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.884 [2024-11-20 09:21:21.816211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:26.884 [2024-11-20 09:21:21.816246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:24:26.884 [2024-11-20 09:21:21.816268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.848321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.884 [2024-11-20 09:21:21.848398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:26.884 [2024-11-20 09:21:21.848437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.002 ms 00:24:26.884 [2024-11-20 09:21:21.848458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.879834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.884 [2024-11-20 09:21:21.879911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:26.884 [2024-11-20 09:21:21.879952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.277 ms 00:24:26.884 [2024-11-20 09:21:21.879974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.911633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.884 [2024-11-20 09:21:21.911731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:26.884 [2024-11-20 09:21:21.911771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.565 ms 00:24:26.884 [2024-11-20 09:21:21.911792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.943012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.884 [2024-11-20 
09:21:21.943310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:26.884 [2024-11-20 09:21:21.943376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.988 ms 00:24:26.884 [2024-11-20 09:21:21.943405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.884 [2024-11-20 09:21:21.943487] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:26.884 [2024-11-20 09:21:21.943523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.943986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:26.884 [2024-11-20 09:21:21.944576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.944989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945447] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.945986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.946010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.946040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.946071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.946099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.946122] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.946170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.946195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.946221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:26.885 [2024-11-20 09:21:21.946256] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:26.885 [2024-11-20 09:21:21.946283] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 686df8b0-28c2-4a9b-82f9-c6b8e367751f 00:24:26.885 [2024-11-20 09:21:21.946306] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:26.885 [2024-11-20 09:21:21.946331] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:26.885 [2024-11-20 09:21:21.946357] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:26.885 [2024-11-20 09:21:21.946383] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:26.885 [2024-11-20 09:21:21.946403] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:26.885 [2024-11-20 09:21:21.946429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:26.885 [2024-11-20 09:21:21.946451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:26.885 [2024-11-20 09:21:21.946478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:26.885 [2024-11-20 09:21:21.946497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:26.885 [2024-11-20 09:21:21.946524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.885 [2024-11-20 09:21:21.946547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:26.885 [2024-11-20 09:21:21.946574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:24:26.885 [2024-11-20 09:21:21.946596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.885 [2024-11-20 09:21:21.965726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.885 [2024-11-20 09:21:21.965792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:26.885 [2024-11-20 09:21:21.965830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.595 ms 00:24:26.885 [2024-11-20 09:21:21.965852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.885 [2024-11-20 09:21:21.966569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.885 [2024-11-20 09:21:21.966618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:26.885 [2024-11-20 09:21:21.966684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:24:26.885 [2024-11-20 09:21:21.966712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.015978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.016091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:27.144 [2024-11-20 09:21:22.016129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.016147] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.016285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.016313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:27.144 [2024-11-20 09:21:22.016341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.016364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.016607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.016640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:27.144 [2024-11-20 09:21:22.016700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.016719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.016758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.016779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:27.144 [2024-11-20 09:21:22.016800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.016816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.131435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.131523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:27.144 [2024-11-20 09:21:22.131553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.131567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.219995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.220260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:27.144 [2024-11-20 09:21:22.220299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.220314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.220473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.220494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.144 [2024-11-20 09:21:22.220516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.220528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.220602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.220622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.144 [2024-11-20 09:21:22.220638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.220677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.220820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.220841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.144 [2024-11-20 09:21:22.220865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:24:27.144 [2024-11-20 09:21:22.220877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.220936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.220955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:27.144 [2024-11-20 09:21:22.220972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.221000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.221055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.221072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.144 [2024-11-20 09:21:22.221088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.221104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.221166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:27.144 [2024-11-20 09:21:22.221196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.144 [2024-11-20 09:21:22.221214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:27.144 [2024-11-20 09:21:22.221227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.144 [2024-11-20 09:21:22.221396] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 656.296 ms, result 0 00:24:27.144 true 00:24:27.144 09:21:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77874 00:24:27.144 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77874 ']' 00:24:27.144 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77874 00:24:27.144 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:24:27.144 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.144 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77874 00:24:27.403 killing process with pid 77874 00:24:27.403 Received shutdown signal, test time was about 4.000000 seconds 00:24:27.403 00:24:27.403 Latency(us) 00:24:27.403 [2024-11-20T09:21:22.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.403 [2024-11-20T09:21:22.523Z] =================================================================================================================== 00:24:27.403 [2024-11-20T09:21:22.523Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.403 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.403 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.403 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77874' 00:24:27.403 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77874 00:24:27.403 09:21:22 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77874 00:24:31.588 Remove shared memory files 00:24:31.588 09:21:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:31.588 09:21:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:24:31.588 09:21:26 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:31.588 09:21:26 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:24:31.589 09:21:26 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:24:31.589 09:21:26 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:24:31.589 09:21:26 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:31.589 09:21:26 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:24:31.589 ************************************ 00:24:31.589 END TEST ftl_bdevperf 00:24:31.589 ************************************ 00:24:31.589 00:24:31.589 real 0m26.488s 00:24:31.589 user 0m30.704s 00:24:31.589 sys 0m1.319s 00:24:31.589 09:21:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.589 09:21:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:31.589 09:21:26 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:31.589 09:21:26 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:31.589 09:21:26 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.589 09:21:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:31.589 ************************************ 00:24:31.589 START TEST ftl_trim 00:24:31.589 ************************************ 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:24:31.589 * Looking for test storage... 00:24:31.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.589 09:21:26 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.589 --rc genhtml_branch_coverage=1 00:24:31.589 --rc genhtml_function_coverage=1 00:24:31.589 --rc genhtml_legend=1 00:24:31.589 --rc geninfo_all_blocks=1 00:24:31.589 --rc geninfo_unexecuted_blocks=1 00:24:31.589 00:24:31.589 ' 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.589 --rc genhtml_branch_coverage=1 00:24:31.589 --rc genhtml_function_coverage=1 00:24:31.589 --rc genhtml_legend=1 00:24:31.589 --rc geninfo_all_blocks=1 00:24:31.589 --rc geninfo_unexecuted_blocks=1 00:24:31.589 00:24:31.589 ' 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.589 --rc genhtml_branch_coverage=1 00:24:31.589 --rc genhtml_function_coverage=1 00:24:31.589 --rc genhtml_legend=1 00:24:31.589 --rc geninfo_all_blocks=1 00:24:31.589 --rc geninfo_unexecuted_blocks=1 00:24:31.589 00:24:31.589 ' 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.589 --rc genhtml_branch_coverage=1 00:24:31.589 --rc genhtml_function_coverage=1 00:24:31.589 --rc genhtml_legend=1 00:24:31.589 --rc geninfo_all_blocks=1 00:24:31.589 --rc geninfo_unexecuted_blocks=1 00:24:31.589 00:24:31.589 ' 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
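The cmp_versions walk traced above ("lt 1.15 2" ending in "return 0") is what decides that this lcov predates version 2 and selects the legacy --rc option spelling. A minimal re-sketch of that dotted-version compare (not the exact scripts/common.sh source; same split-on-separators, numeric field-by-field idea, numeric fields only):

    lt() {
        local a b i
        IFS='.-' read -ra a <<< "$1"
        IFS='.-' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0   # earlier field smaller -> less-than
            ((${a[i]:-0} > ${b[i]:-0})) && return 1   # earlier field larger -> not less-than
        done
        return 1   # all fields equal -> not strictly less-than
    }
    lt 1.15 2 && echo 'lcov 1.15 predates 2.x: keep --rc lcov_*_coverage=1'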
00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:31.589 09:21:26 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78234 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:24:31.589 09:21:26 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78234 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78234 ']' 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.589 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.590 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.590 09:21:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:31.590 [2024-11-20 09:21:26.460563] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:24:31.590 [2024-11-20 09:21:26.460771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78234 ] 00:24:31.590 [2024-11-20 09:21:26.651911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:31.848 [2024-11-20 09:21:26.816855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.848 [2024-11-20 09:21:26.816967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.848 [2024-11-20 09:21:26.816982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.783 09:21:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.783 09:21:27 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:32.783 09:21:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:32.783 09:21:27 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:24:32.783 09:21:27 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:32.783 09:21:27 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:24:32.783 09:21:27 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:24:32.783 09:21:27 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:33.349 09:21:28 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:33.349 09:21:28 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:24:33.349 09:21:28 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:33.349 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:33.349 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:33.349 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:33.349 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:33.349 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:33.606 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:33.606 { 00:24:33.606 "name": "nvme0n1", 00:24:33.606 "aliases": [ 
00:24:33.606 "dfab477d-839a-4fc8-b0e6-d991e72110b3" 00:24:33.606 ], 00:24:33.606 "product_name": "NVMe disk", 00:24:33.606 "block_size": 4096, 00:24:33.606 "num_blocks": 1310720, 00:24:33.606 "uuid": "dfab477d-839a-4fc8-b0e6-d991e72110b3", 00:24:33.606 "numa_id": -1, 00:24:33.606 "assigned_rate_limits": { 00:24:33.606 "rw_ios_per_sec": 0, 00:24:33.606 "rw_mbytes_per_sec": 0, 00:24:33.606 "r_mbytes_per_sec": 0, 00:24:33.606 "w_mbytes_per_sec": 0 00:24:33.606 }, 00:24:33.606 "claimed": true, 00:24:33.606 "claim_type": "read_many_write_one", 00:24:33.606 "zoned": false, 00:24:33.606 "supported_io_types": { 00:24:33.606 "read": true, 00:24:33.606 "write": true, 00:24:33.606 "unmap": true, 00:24:33.606 "flush": true, 00:24:33.606 "reset": true, 00:24:33.606 "nvme_admin": true, 00:24:33.606 "nvme_io": true, 00:24:33.606 "nvme_io_md": false, 00:24:33.606 "write_zeroes": true, 00:24:33.606 "zcopy": false, 00:24:33.606 "get_zone_info": false, 00:24:33.606 "zone_management": false, 00:24:33.606 "zone_append": false, 00:24:33.606 "compare": true, 00:24:33.606 "compare_and_write": false, 00:24:33.606 "abort": true, 00:24:33.606 "seek_hole": false, 00:24:33.606 "seek_data": false, 00:24:33.606 "copy": true, 00:24:33.606 "nvme_iov_md": false 00:24:33.606 }, 00:24:33.606 "driver_specific": { 00:24:33.606 "nvme": [ 00:24:33.606 { 00:24:33.606 "pci_address": "0000:00:11.0", 00:24:33.606 "trid": { 00:24:33.606 "trtype": "PCIe", 00:24:33.606 "traddr": "0000:00:11.0" 00:24:33.606 }, 00:24:33.606 "ctrlr_data": { 00:24:33.606 "cntlid": 0, 00:24:33.606 "vendor_id": "0x1b36", 00:24:33.606 "model_number": "QEMU NVMe Ctrl", 00:24:33.606 "serial_number": "12341", 00:24:33.606 "firmware_revision": "8.0.0", 00:24:33.606 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:33.606 "oacs": { 00:24:33.606 "security": 0, 00:24:33.606 "format": 1, 00:24:33.606 "firmware": 0, 00:24:33.606 "ns_manage": 1 00:24:33.606 }, 00:24:33.606 "multi_ctrlr": false, 00:24:33.606 "ana_reporting": false 00:24:33.606 }, 00:24:33.606 "vs": { 00:24:33.606 "nvme_version": "1.4" 00:24:33.606 }, 00:24:33.606 "ns_data": { 00:24:33.606 "id": 1, 00:24:33.606 "can_share": false 00:24:33.606 } 00:24:33.606 } 00:24:33.606 ], 00:24:33.606 "mp_policy": "active_passive" 00:24:33.606 } 00:24:33.606 } 00:24:33.606 ]' 00:24:33.606 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:33.606 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:33.606 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:33.606 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:33.606 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:33.606 09:21:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:24:33.606 09:21:28 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:24:33.606 09:21:28 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:33.606 09:21:28 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:24:33.606 09:21:28 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:33.606 09:21:28 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:34.173 09:21:29 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=2e3ca1d9-b1a3-4a64-8892-a730d2dc4e27 00:24:34.173 09:21:29 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:24:34.173 09:21:29 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 2e3ca1d9-b1a3-4a64-8892-a730d2dc4e27 00:24:34.431 09:21:29 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:34.690 09:21:29 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b982fba5-de42-497c-9388-bbc9a5221875 00:24:34.690 09:21:29 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b982fba5-de42-497c-9388-bbc9a5221875 00:24:34.948 09:21:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:34.948 09:21:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:34.948 09:21:30 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:24:34.948 09:21:30 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:34.948 09:21:30 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:34.948 09:21:30 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:24:34.949 09:21:30 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:34.949 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:34.949 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:34.949 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:34.949 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:34.949 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:35.514 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:35.514 { 00:24:35.514 "name": "7b97a44e-8bae-4b13-81ac-89c2769dc7f8", 00:24:35.514 "aliases": [ 00:24:35.514 "lvs/nvme0n1p0" 00:24:35.514 ], 00:24:35.514 "product_name": "Logical Volume", 00:24:35.514 "block_size": 4096, 00:24:35.514 "num_blocks": 26476544, 00:24:35.514 "uuid": "7b97a44e-8bae-4b13-81ac-89c2769dc7f8", 00:24:35.514 "assigned_rate_limits": { 00:24:35.514 "rw_ios_per_sec": 0, 00:24:35.514 "rw_mbytes_per_sec": 0, 00:24:35.514 "r_mbytes_per_sec": 0, 00:24:35.514 "w_mbytes_per_sec": 0 00:24:35.514 }, 00:24:35.514 "claimed": false, 00:24:35.514 "zoned": false, 00:24:35.514 "supported_io_types": { 00:24:35.514 "read": true, 00:24:35.514 "write": true, 00:24:35.514 "unmap": true, 00:24:35.514 "flush": false, 00:24:35.514 "reset": true, 00:24:35.514 "nvme_admin": false, 00:24:35.514 "nvme_io": false, 00:24:35.514 "nvme_io_md": false, 00:24:35.514 "write_zeroes": true, 00:24:35.514 "zcopy": false, 00:24:35.514 "get_zone_info": false, 00:24:35.514 "zone_management": false, 00:24:35.514 "zone_append": false, 00:24:35.514 "compare": false, 00:24:35.514 "compare_and_write": false, 00:24:35.514 "abort": false, 00:24:35.514 "seek_hole": true, 00:24:35.514 "seek_data": true, 00:24:35.514 "copy": false, 00:24:35.514 "nvme_iov_md": false 00:24:35.514 }, 00:24:35.514 "driver_specific": { 00:24:35.514 "lvol": { 00:24:35.514 "lvol_store_uuid": "b982fba5-de42-497c-9388-bbc9a5221875", 00:24:35.514 "base_bdev": "nvme0n1", 00:24:35.514 "thin_provision": true, 00:24:35.514 "num_allocated_clusters": 0, 00:24:35.514 "snapshot": false, 00:24:35.514 "clone": false, 00:24:35.514 "esnap_clone": false 00:24:35.514 } 00:24:35.514 } 00:24:35.514 } 00:24:35.514 ]' 00:24:35.514 09:21:30 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:35.514 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:35.514 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:35.514 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:35.514 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:35.514 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:35.514 09:21:30 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:24:35.514 09:21:30 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:24:35.514 09:21:30 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:35.772 09:21:30 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:35.772 09:21:30 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:35.772 09:21:30 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:35.772 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:35.772 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:35.772 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:35.772 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:35.772 09:21:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:36.337 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:36.337 { 00:24:36.337 "name": "7b97a44e-8bae-4b13-81ac-89c2769dc7f8", 00:24:36.337 "aliases": [ 00:24:36.337 "lvs/nvme0n1p0" 00:24:36.337 ], 00:24:36.337 "product_name": "Logical Volume", 00:24:36.337 "block_size": 4096, 00:24:36.337 "num_blocks": 26476544, 00:24:36.337 "uuid": "7b97a44e-8bae-4b13-81ac-89c2769dc7f8", 00:24:36.337 "assigned_rate_limits": { 00:24:36.337 "rw_ios_per_sec": 0, 00:24:36.337 "rw_mbytes_per_sec": 0, 00:24:36.337 "r_mbytes_per_sec": 0, 00:24:36.337 "w_mbytes_per_sec": 0 00:24:36.337 }, 00:24:36.337 "claimed": false, 00:24:36.337 "zoned": false, 00:24:36.337 "supported_io_types": { 00:24:36.337 "read": true, 00:24:36.337 "write": true, 00:24:36.337 "unmap": true, 00:24:36.337 "flush": false, 00:24:36.337 "reset": true, 00:24:36.337 "nvme_admin": false, 00:24:36.337 "nvme_io": false, 00:24:36.337 "nvme_io_md": false, 00:24:36.337 "write_zeroes": true, 00:24:36.337 "zcopy": false, 00:24:36.337 "get_zone_info": false, 00:24:36.337 "zone_management": false, 00:24:36.337 "zone_append": false, 00:24:36.337 "compare": false, 00:24:36.337 "compare_and_write": false, 00:24:36.337 "abort": false, 00:24:36.337 "seek_hole": true, 00:24:36.337 "seek_data": true, 00:24:36.337 "copy": false, 00:24:36.337 "nvme_iov_md": false 00:24:36.337 }, 00:24:36.337 "driver_specific": { 00:24:36.337 "lvol": { 00:24:36.337 "lvol_store_uuid": "b982fba5-de42-497c-9388-bbc9a5221875", 00:24:36.337 "base_bdev": "nvme0n1", 00:24:36.337 "thin_provision": true, 00:24:36.337 "num_allocated_clusters": 0, 00:24:36.337 "snapshot": false, 00:24:36.337 "clone": false, 00:24:36.337 "esnap_clone": false 00:24:36.337 } 00:24:36.337 } 00:24:36.337 } 00:24:36.337 ]' 00:24:36.337 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:36.337 09:21:31 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:24:36.337 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:36.337 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:36.337 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:36.337 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:36.337 09:21:31 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:24:36.337 09:21:31 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:36.595 09:21:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:24:36.595 09:21:31 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:24:36.595 09:21:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:36.595 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:36.595 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:36.595 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:24:36.595 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:24:36.596 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b97a44e-8bae-4b13-81ac-89c2769dc7f8 00:24:36.854 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:36.854 { 00:24:36.854 "name": "7b97a44e-8bae-4b13-81ac-89c2769dc7f8", 00:24:36.854 "aliases": [ 00:24:36.854 "lvs/nvme0n1p0" 00:24:36.854 ], 00:24:36.854 "product_name": "Logical Volume", 00:24:36.854 "block_size": 4096, 00:24:36.854 "num_blocks": 26476544, 00:24:36.854 "uuid": "7b97a44e-8bae-4b13-81ac-89c2769dc7f8", 00:24:36.854 "assigned_rate_limits": { 00:24:36.854 "rw_ios_per_sec": 0, 00:24:36.854 "rw_mbytes_per_sec": 0, 00:24:36.854 "r_mbytes_per_sec": 0, 00:24:36.854 "w_mbytes_per_sec": 0 00:24:36.854 }, 00:24:36.854 "claimed": false, 00:24:36.854 "zoned": false, 00:24:36.854 "supported_io_types": { 00:24:36.854 "read": true, 00:24:36.854 "write": true, 00:24:36.854 "unmap": true, 00:24:36.854 "flush": false, 00:24:36.854 "reset": true, 00:24:36.854 "nvme_admin": false, 00:24:36.854 "nvme_io": false, 00:24:36.854 "nvme_io_md": false, 00:24:36.854 "write_zeroes": true, 00:24:36.854 "zcopy": false, 00:24:36.854 "get_zone_info": false, 00:24:36.854 "zone_management": false, 00:24:36.854 "zone_append": false, 00:24:36.854 "compare": false, 00:24:36.854 "compare_and_write": false, 00:24:36.854 "abort": false, 00:24:36.854 "seek_hole": true, 00:24:36.854 "seek_data": true, 00:24:36.854 "copy": false, 00:24:36.854 "nvme_iov_md": false 00:24:36.854 }, 00:24:36.854 "driver_specific": { 00:24:36.854 "lvol": { 00:24:36.854 "lvol_store_uuid": "b982fba5-de42-497c-9388-bbc9a5221875", 00:24:36.854 "base_bdev": "nvme0n1", 00:24:36.854 "thin_provision": true, 00:24:36.854 "num_allocated_clusters": 0, 00:24:36.854 "snapshot": false, 00:24:36.854 "clone": false, 00:24:36.854 "esnap_clone": false 00:24:36.854 } 00:24:36.854 } 00:24:36.854 } 00:24:36.854 ]' 00:24:36.854 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:36.854 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:24:36.854 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:36.854 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:24:36.854 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:36.854 09:21:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:24:36.854 09:21:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:24:36.854 09:21:31 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7b97a44e-8bae-4b13-81ac-89c2769dc7f8 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:24:37.419 [2024-11-20 09:21:32.268499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.268572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:37.419 [2024-11-20 09:21:32.268597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:37.419 [2024-11-20 09:21:32.268610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.272320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.272365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:37.419 [2024-11-20 09:21:32.272386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:24:37.419 [2024-11-20 09:21:32.272398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.272541] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:37.419 [2024-11-20 09:21:32.273511] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:37.419 [2024-11-20 09:21:32.273557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.273572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:37.419 [2024-11-20 09:21:32.273587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:24:37.419 [2024-11-20 09:21:32.273599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.273845] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6e0f82ac-54e0-4f88-a0e4-9e19270c421c 00:24:37.419 [2024-11-20 09:21:32.275673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.275717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:37.419 [2024-11-20 09:21:32.275735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:37.419 [2024-11-20 09:21:32.275749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.285393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.285467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:37.419 [2024-11-20 09:21:32.285489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.547 ms 00:24:37.419 [2024-11-20 09:21:32.285507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.285731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.285758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:37.419 [2024-11-20 09:21:32.285772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.136 ms 00:24:37.419 [2024-11-20 09:21:32.285791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.285868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.285889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:37.419 [2024-11-20 09:21:32.285903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:37.419 [2024-11-20 09:21:32.285917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.285966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:37.419 [2024-11-20 09:21:32.291276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.291317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:37.419 [2024-11-20 09:21:32.291344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.316 ms 00:24:37.419 [2024-11-20 09:21:32.291356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.291436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.291453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:37.419 [2024-11-20 09:21:32.291469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:37.419 [2024-11-20 09:21:32.291501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.291547] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:37.419 [2024-11-20 09:21:32.291729] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:37.419 [2024-11-20 09:21:32.291792] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:37.419 [2024-11-20 09:21:32.291810] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:37.419 [2024-11-20 09:21:32.291829] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:37.419 [2024-11-20 09:21:32.291843] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:37.419 [2024-11-20 09:21:32.291858] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:37.419 [2024-11-20 09:21:32.291870] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:37.419 [2024-11-20 09:21:32.291884] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:37.419 [2024-11-20 09:21:32.291898] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:37.419 [2024-11-20 09:21:32.291913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 [2024-11-20 09:21:32.291924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:37.419 [2024-11-20 09:21:32.291942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:24:37.419 [2024-11-20 09:21:32.291954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.292067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.419 
[2024-11-20 09:21:32.292081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:37.419 [2024-11-20 09:21:32.292096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:37.419 [2024-11-20 09:21:32.292106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.419 [2024-11-20 09:21:32.292254] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:37.419 [2024-11-20 09:21:32.292270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:37.419 [2024-11-20 09:21:32.292285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:37.419 [2024-11-20 09:21:32.292296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:37.419 [2024-11-20 09:21:32.292320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:37.419 [2024-11-20 09:21:32.292344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:37.419 [2024-11-20 09:21:32.292357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:37.419 [2024-11-20 09:21:32.292380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:37.419 [2024-11-20 09:21:32.292390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:37.419 [2024-11-20 09:21:32.292403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:37.419 [2024-11-20 09:21:32.292413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:37.419 [2024-11-20 09:21:32.292426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:37.419 [2024-11-20 09:21:32.292437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:37.419 [2024-11-20 09:21:32.292462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:37.419 [2024-11-20 09:21:32.292477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:37.419 [2024-11-20 09:21:32.292501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:37.419 [2024-11-20 09:21:32.292524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:37.419 [2024-11-20 09:21:32.292535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:37.419 [2024-11-20 09:21:32.292558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:37.419 [2024-11-20 09:21:32.292571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:37.419 [2024-11-20 09:21:32.292594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:24:37.419 [2024-11-20 09:21:32.292605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:37.419 [2024-11-20 09:21:32.292628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:37.419 [2024-11-20 09:21:32.292643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:37.419 [2024-11-20 09:21:32.292683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:37.419 [2024-11-20 09:21:32.292694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:37.419 [2024-11-20 09:21:32.292706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:37.419 [2024-11-20 09:21:32.292717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:37.419 [2024-11-20 09:21:32.292730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:37.419 [2024-11-20 09:21:32.292740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.419 [2024-11-20 09:21:32.292753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:37.419 [2024-11-20 09:21:32.292763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:37.420 [2024-11-20 09:21:32.292776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.420 [2024-11-20 09:21:32.292786] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:37.420 [2024-11-20 09:21:32.292800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:37.420 [2024-11-20 09:21:32.292811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:37.420 [2024-11-20 09:21:32.292826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.420 [2024-11-20 09:21:32.292838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:37.420 [2024-11-20 09:21:32.292854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:37.420 [2024-11-20 09:21:32.292864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:37.420 [2024-11-20 09:21:32.292878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:37.420 [2024-11-20 09:21:32.292888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:37.420 [2024-11-20 09:21:32.292909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:37.420 [2024-11-20 09:21:32.292926] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:37.420 [2024-11-20 09:21:32.292943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:37.420 [2024-11-20 09:21:32.292956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:37.420 [2024-11-20 09:21:32.292971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:37.420 [2024-11-20 09:21:32.292982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:24:37.420 [2024-11-20 09:21:32.292996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:37.420 [2024-11-20 09:21:32.293008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:37.420 [2024-11-20 09:21:32.293022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:37.420 [2024-11-20 09:21:32.293034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:37.420 [2024-11-20 09:21:32.293048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:37.420 [2024-11-20 09:21:32.293059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:37.420 [2024-11-20 09:21:32.293076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:37.420 [2024-11-20 09:21:32.293088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:37.420 [2024-11-20 09:21:32.293102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:37.420 [2024-11-20 09:21:32.293113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:37.420 [2024-11-20 09:21:32.293127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:37.420 [2024-11-20 09:21:32.293139] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:37.420 [2024-11-20 09:21:32.293164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:37.420 [2024-11-20 09:21:32.293177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:37.420 [2024-11-20 09:21:32.293192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:37.420 [2024-11-20 09:21:32.293203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:37.420 [2024-11-20 09:21:32.293217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:37.420 [2024-11-20 09:21:32.293230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.420 [2024-11-20 09:21:32.293244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:37.420 [2024-11-20 09:21:32.293256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:24:37.420 [2024-11-20 09:21:32.293269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.420 [2024-11-20 09:21:32.293365] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:24:37.420 [2024-11-20 09:21:32.293388] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:39.946 [2024-11-20 09:21:35.049353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.946 [2024-11-20 09:21:35.049444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:39.946 [2024-11-20 09:21:35.049483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2755.999 ms 00:24:39.946 [2024-11-20 09:21:35.049502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.203 [2024-11-20 09:21:35.095921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.203 [2024-11-20 09:21:35.096006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:40.203 [2024-11-20 09:21:35.096033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.974 ms 00:24:40.203 [2024-11-20 09:21:35.096060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.203 [2024-11-20 09:21:35.096295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.203 [2024-11-20 09:21:35.096323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:40.203 [2024-11-20 09:21:35.096341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:24:40.203 [2024-11-20 09:21:35.096362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.203 [2024-11-20 09:21:35.169102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.204 [2024-11-20 09:21:35.169197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:40.204 [2024-11-20 09:21:35.169229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.650 ms 00:24:40.204 [2024-11-20 09:21:35.169255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.204 [2024-11-20 09:21:35.169482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.204 [2024-11-20 09:21:35.169516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:40.204 [2024-11-20 09:21:35.169539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:40.204 [2024-11-20 09:21:35.169560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.204 [2024-11-20 09:21:35.170314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.204 [2024-11-20 09:21:35.170372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:40.204 [2024-11-20 09:21:35.170395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:24:40.204 [2024-11-20 09:21:35.170417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.204 [2024-11-20 09:21:35.170650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.204 [2024-11-20 09:21:35.170690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:40.204 [2024-11-20 09:21:35.170707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:24:40.204 [2024-11-20 09:21:35.170726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.204 [2024-11-20 09:21:35.196778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.204 [2024-11-20 09:21:35.197094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:24:40.204 [2024-11-20 09:21:35.197134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.976 ms 00:24:40.204 [2024-11-20 09:21:35.197154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.204 [2024-11-20 09:21:35.215955] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:40.204 [2024-11-20 09:21:35.240550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.204 [2024-11-20 09:21:35.240643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:40.204 [2024-11-20 09:21:35.240693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.177 ms 00:24:40.204 [2024-11-20 09:21:35.240710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.486 [2024-11-20 09:21:35.330602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.486 [2024-11-20 09:21:35.330717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:40.486 [2024-11-20 09:21:35.330749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.705 ms 00:24:40.486 [2024-11-20 09:21:35.330765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.486 [2024-11-20 09:21:35.331151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.486 [2024-11-20 09:21:35.331177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:40.486 [2024-11-20 09:21:35.331202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:24:40.486 [2024-11-20 09:21:35.331217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.486 [2024-11-20 09:21:35.371159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.486 [2024-11-20 09:21:35.371246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:40.486 [2024-11-20 09:21:35.371276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.874 ms 00:24:40.486 [2024-11-20 09:21:35.371292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.486 [2024-11-20 09:21:35.410684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.486 [2024-11-20 09:21:35.410766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:40.486 [2024-11-20 09:21:35.410796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.181 ms 00:24:40.486 [2024-11-20 09:21:35.410811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.486 [2024-11-20 09:21:35.411983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.486 [2024-11-20 09:21:35.412025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:40.486 [2024-11-20 09:21:35.412048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:24:40.486 [2024-11-20 09:21:35.412063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.486 [2024-11-20 09:21:35.519155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.486 [2024-11-20 09:21:35.519261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:40.486 [2024-11-20 09:21:35.519457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.024 ms 00:24:40.486 [2024-11-20 09:21:35.519473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
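[editorial sketch] The FTL startup being traced through this section is the final step of the bdev stack that ftl/trim.sh assembles before any workload runs. A minimal recap of that sequence, reconstructed from the rpc.py invocations visible earlier in this log (addresses, sizes, and UUIDs are the ones the log printed; the RPC shell variable is illustrative, not part of the test script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe: 1310720 blocks x 4096 B
    $RPC bdev_lvol_create_lvstore nvme0n1 lvs                           # -> lvs b982fba5-de42-497c-9388-bbc9a5221875
    $RPC bdev_lvol_create nvme0n1p0 103424 -t -u b982fba5-de42-497c-9388-bbc9a5221875   # thin 103424 MiB lvol
    $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe
    $RPC bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB partition -> nvc0n1p0
    $RPC -t 240 bdev_ftl_create -b ftl0 -d 7b97a44e-8bae-4b13-81ac-89c2769dc7f8 -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10         # trim.sh@49: the create traced above

The generous -t 240 RPC timeout presumably covers the synchronous startup work done inside bdev_ftl_create, such as the ~2.8 s "Scrub NV cache" step recorded above, which scales with the size of the cache partition.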
00:24:40.486 [2024-11-20 09:21:35.561453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.486 [2024-11-20 09:21:35.561541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:40.486 [2024-11-20 09:21:35.561572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.741 ms 00:24:40.486 [2024-11-20 09:21:35.561589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.486 [2024-11-20 09:21:35.595603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.486 [2024-11-20 09:21:35.595692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:40.486 [2024-11-20 09:21:35.595718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.795 ms 00:24:40.486 [2024-11-20 09:21:35.595731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-11-20 09:21:35.627136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-11-20 09:21:35.627227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:40.745 [2024-11-20 09:21:35.627254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.262 ms 00:24:40.745 [2024-11-20 09:21:35.627286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-11-20 09:21:35.627421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-11-20 09:21:35.627446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:40.745 [2024-11-20 09:21:35.627466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:40.745 [2024-11-20 09:21:35.627478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-11-20 09:21:35.627584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.745 [2024-11-20 09:21:35.627600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:40.745 [2024-11-20 09:21:35.627620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:24:40.745 [2024-11-20 09:21:35.627632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.745 [2024-11-20 09:21:35.628877] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:40.745 [2024-11-20 09:21:35.633211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3360.027 ms, result 0 00:24:40.745 [2024-11-20 09:21:35.634181] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:40.745 { 00:24:40.745 "name": "ftl0", 00:24:40.745 "uuid": "6e0f82ac-54e0-4f88-a0e4-9e19270c421c" 00:24:40.745 } 00:24:40.745 09:21:35 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:24:40.745 09:21:35 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:24:40.745 09:21:35 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:40.745 09:21:35 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:24:40.745 09:21:35 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:40.745 09:21:35 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:40.745 09:21:35 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:41.004 09:21:35 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:24:41.263 [ 00:24:41.263 { 00:24:41.263 "name": "ftl0", 00:24:41.263 "aliases": [ 00:24:41.263 "6e0f82ac-54e0-4f88-a0e4-9e19270c421c" 00:24:41.263 ], 00:24:41.263 "product_name": "FTL disk", 00:24:41.263 "block_size": 4096, 00:24:41.263 "num_blocks": 23592960, 00:24:41.263 "uuid": "6e0f82ac-54e0-4f88-a0e4-9e19270c421c", 00:24:41.263 "assigned_rate_limits": { 00:24:41.263 "rw_ios_per_sec": 0, 00:24:41.263 "rw_mbytes_per_sec": 0, 00:24:41.263 "r_mbytes_per_sec": 0, 00:24:41.263 "w_mbytes_per_sec": 0 00:24:41.263 }, 00:24:41.263 "claimed": false, 00:24:41.263 "zoned": false, 00:24:41.263 "supported_io_types": { 00:24:41.263 "read": true, 00:24:41.263 "write": true, 00:24:41.263 "unmap": true, 00:24:41.263 "flush": true, 00:24:41.263 "reset": false, 00:24:41.263 "nvme_admin": false, 00:24:41.263 "nvme_io": false, 00:24:41.263 "nvme_io_md": false, 00:24:41.263 "write_zeroes": true, 00:24:41.263 "zcopy": false, 00:24:41.263 "get_zone_info": false, 00:24:41.263 "zone_management": false, 00:24:41.263 "zone_append": false, 00:24:41.263 "compare": false, 00:24:41.263 "compare_and_write": false, 00:24:41.263 "abort": false, 00:24:41.263 "seek_hole": false, 00:24:41.263 "seek_data": false, 00:24:41.263 "copy": false, 00:24:41.263 "nvme_iov_md": false 00:24:41.263 }, 00:24:41.263 "driver_specific": { 00:24:41.263 "ftl": { 00:24:41.263 "base_bdev": "7b97a44e-8bae-4b13-81ac-89c2769dc7f8", 00:24:41.263 "cache": "nvc0n1p0" 00:24:41.263 } 00:24:41.263 } 00:24:41.263 } 00:24:41.263 ] 00:24:41.263 09:21:36 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:24:41.263 09:21:36 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:24:41.263 09:21:36 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:41.521 09:21:36 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:24:41.521 09:21:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:24:41.781 09:21:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:24:41.781 { 00:24:41.781 "name": "ftl0", 00:24:41.781 "aliases": [ 00:24:41.781 "6e0f82ac-54e0-4f88-a0e4-9e19270c421c" 00:24:41.781 ], 00:24:41.781 "product_name": "FTL disk", 00:24:41.781 "block_size": 4096, 00:24:41.781 "num_blocks": 23592960, 00:24:41.781 "uuid": "6e0f82ac-54e0-4f88-a0e4-9e19270c421c", 00:24:41.781 "assigned_rate_limits": { 00:24:41.781 "rw_ios_per_sec": 0, 00:24:41.781 "rw_mbytes_per_sec": 0, 00:24:41.781 "r_mbytes_per_sec": 0, 00:24:41.781 "w_mbytes_per_sec": 0 00:24:41.781 }, 00:24:41.781 "claimed": false, 00:24:41.781 "zoned": false, 00:24:41.781 "supported_io_types": { 00:24:41.781 "read": true, 00:24:41.781 "write": true, 00:24:41.781 "unmap": true, 00:24:41.781 "flush": true, 00:24:41.781 "reset": false, 00:24:41.781 "nvme_admin": false, 00:24:41.781 "nvme_io": false, 00:24:41.781 "nvme_io_md": false, 00:24:41.781 "write_zeroes": true, 00:24:41.781 "zcopy": false, 00:24:41.781 "get_zone_info": false, 00:24:41.781 "zone_management": false, 00:24:41.781 "zone_append": false, 00:24:41.781 "compare": false, 00:24:41.781 "compare_and_write": false, 00:24:41.781 "abort": false, 00:24:41.781 "seek_hole": false, 00:24:41.781 "seek_data": false, 00:24:41.781 "copy": false, 00:24:41.781 "nvme_iov_md": false 00:24:41.781 }, 00:24:41.781 "driver_specific": { 00:24:41.781 "ftl": { 00:24:41.781 "base_bdev": "7b97a44e-8bae-4b13-81ac-89c2769dc7f8", 
00:24:41.781 "cache": "nvc0n1p0" 00:24:41.781 } 00:24:41.781 } 00:24:41.781 } 00:24:41.781 ]' 00:24:41.781 09:21:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:24:42.039 09:21:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:24:42.039 09:21:36 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:42.298 [2024-11-20 09:21:37.202450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.202721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:42.298 [2024-11-20 09:21:37.202861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:42.298 [2024-11-20 09:21:37.203001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.203084] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:42.298 [2024-11-20 09:21:37.206769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.206806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:42.298 [2024-11-20 09:21:37.206831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.655 ms 00:24:42.298 [2024-11-20 09:21:37.206844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.207427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.207452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:42.298 [2024-11-20 09:21:37.207470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:24:42.298 [2024-11-20 09:21:37.207482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.211134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.211174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:42.298 [2024-11-20 09:21:37.211193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.613 ms 00:24:42.298 [2024-11-20 09:21:37.211205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.218546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.218584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:42.298 [2024-11-20 09:21:37.218603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.258 ms 00:24:42.298 [2024-11-20 09:21:37.218614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.250183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.250249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:42.298 [2024-11-20 09:21:37.250276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.458 ms 00:24:42.298 [2024-11-20 09:21:37.250288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.269287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.269362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:42.298 [2024-11-20 09:21:37.269386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.856 ms 00:24:42.298 [2024-11-20 09:21:37.269403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.269718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.269742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:42.298 [2024-11-20 09:21:37.269760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:24:42.298 [2024-11-20 09:21:37.269772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.301811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.301887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:42.298 [2024-11-20 09:21:37.301912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.993 ms 00:24:42.298 [2024-11-20 09:21:37.301925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.333049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.333126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:42.298 [2024-11-20 09:21:37.333156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.974 ms 00:24:42.298 [2024-11-20 09:21:37.333168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.363740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.363999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:42.298 [2024-11-20 09:21:37.364038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.431 ms 00:24:42.298 [2024-11-20 09:21:37.364053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.394638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.298 [2024-11-20 09:21:37.394923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:42.298 [2024-11-20 09:21:37.394961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.376 ms 00:24:42.298 [2024-11-20 09:21:37.394975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.298 [2024-11-20 09:21:37.395150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:42.298 [2024-11-20 09:21:37.395178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:42.298 [2024-11-20 09:21:37.395197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:42.298 [2024-11-20 09:21:37.395210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:42.298 [2024-11-20 09:21:37.395232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:42.298 [2024-11-20 09:21:37.395246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:42.298 [2024-11-20 09:21:37.395264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:42.298 [2024-11-20 09:21:37.395276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395291] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 
[2024-11-20 09:21:37.395690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.395998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.396012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:42.299 [2024-11-20 09:21:37.396024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:24:42.299 [2024-11-20 09:21:37.396038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 58-100: 0 / 261120 wr_cnt: 0 state: free 00:24:42.300 [2024-11-20 09:21:37.396636] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:42.300 [2024-11-20 09:21:37.396667] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e0f82ac-54e0-4f88-a0e4-9e19270c421c 00:24:42.300 [2024-11-20 09:21:37.396681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:42.300 [2024-11-20 09:21:37.396695] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:42.300 [2024-11-20 09:21:37.396706] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:42.300 [2024-11-20 09:21:37.396720] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:42.300 [2024-11-20 09:21:37.396736] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:42.300 [2024-11-20 09:21:37.396750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:42.300 [2024-11-20 09:21:37.396761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:42.300 [2024-11-20 09:21:37.396774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:42.300 [2024-11-20 09:21:37.396785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:42.300 [2024-11-20 09:21:37.396799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.300 [2024-11-20 09:21:37.396810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:42.300 [2024-11-20 09:21:37.396826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:24:42.300 [2024-11-20 09:21:37.396837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.300 [2024-11-20 09:21:37.414196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.558 [2024-11-20 09:21:37.414436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:42.558 [2024-11-20 09:21:37.414483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.308 ms 00:24:42.558 [2024-11-20 09:21:37.414496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.558 [2024-11-20 09:21:37.415094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.558 [2024-11-20 09:21:37.415122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:42.558 [2024-11-20 09:21:37.415141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:24:42.558 [2024-11-20 09:21:37.415153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.558 [2024-11-20 09:21:37.474881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.558 [2024-11-20 09:21:37.474962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:42.558 [2024-11-20 09:21:37.474985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.558 [2024-11-20 09:21:37.474998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.558 [2024-11-20 09:21:37.475168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.558 [2024-11-20 09:21:37.475186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:42.558 [2024-11-20 09:21:37.475202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.558 [2024-11-20 09:21:37.475215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.558 [2024-11-20 09:21:37.475312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.558 [2024-11-20 09:21:37.475332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:42.558 [2024-11-20 09:21:37.475354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.558 [2024-11-20 09:21:37.475366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.558 [2024-11-20 09:21:37.475407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.558 [2024-11-20 09:21:37.475420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:42.558 [2024-11-20 09:21:37.475435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.558 [2024-11-20 09:21:37.475446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.558 [2024-11-20 09:21:37.593548] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.558 [2024-11-20 09:21:37.593783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:42.558 [2024-11-20 09:21:37.593821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.558 [2024-11-20 09:21:37.593835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.815 [2024-11-20 09:21:37.681530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.815 [2024-11-20 09:21:37.681751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:42.815 [2024-11-20 09:21:37.681789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.815 [2024-11-20 09:21:37.681804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.815 [2024-11-20 09:21:37.681933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.815 [2024-11-20 09:21:37.681952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:42.815 [2024-11-20 09:21:37.681998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.815 [2024-11-20 09:21:37.682015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.815 [2024-11-20 09:21:37.682086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.815 [2024-11-20 09:21:37.682100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:42.815 [2024-11-20 09:21:37.682114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.815 [2024-11-20 09:21:37.682126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.815 [2024-11-20 09:21:37.682298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.815 [2024-11-20 09:21:37.682317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:42.815 [2024-11-20 09:21:37.682333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.815 [2024-11-20 09:21:37.682345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.815 [2024-11-20 09:21:37.682437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.815 [2024-11-20 09:21:37.682457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:42.815 [2024-11-20 09:21:37.682473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.815 [2024-11-20 09:21:37.682484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.815 [2024-11-20 09:21:37.682549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.815 [2024-11-20 09:21:37.682564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:42.815 [2024-11-20 09:21:37.682582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.815 [2024-11-20 09:21:37.682593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.815 [2024-11-20 09:21:37.682699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.815 [2024-11-20 09:21:37.682719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:42.815 [2024-11-20 09:21:37.682735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.815 [2024-11-20 09:21:37.682747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:24:42.815 [2024-11-20 09:21:37.682972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 480.502 ms, result 0 00:24:42.815 true 00:24:42.815 09:21:37 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78234 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78234 ']' 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78234 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78234 00:24:42.815 killing process with pid 78234 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78234' 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78234 00:24:42.815 09:21:37 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78234 00:24:48.083 09:21:42 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:24:48.650 65536+0 records in 00:24:48.650 65536+0 records out 00:24:48.650 268435456 bytes (268 MB, 256 MiB) copied, 1.18731 s, 226 MB/s 00:24:48.650 09:21:43 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:48.650 [2024-11-20 09:21:43.688984] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:24:48.650 [2024-11-20 09:21:43.689995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78452 ] 00:24:48.908 [2024-11-20 09:21:43.864411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.908 [2024-11-20 09:21:43.997929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.476 [2024-11-20 09:21:44.358505] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.476 [2024-11-20 09:21:44.358916] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.476 [2024-11-20 09:21:44.527074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.527409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:49.476 [2024-11-20 09:21:44.527586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:49.476 [2024-11-20 09:21:44.527665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.531501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.531689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.476 [2024-11-20 09:21:44.531828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.593 ms 00:24:49.476 [2024-11-20 09:21:44.531889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.532249] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:49.476 [2024-11-20 09:21:44.533369] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:49.476 [2024-11-20 09:21:44.533573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.533730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.476 [2024-11-20 09:21:44.533793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:24:49.476 [2024-11-20 09:21:44.533951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.536180] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:49.476 [2024-11-20 09:21:44.553284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.553525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:49.476 [2024-11-20 09:21:44.553683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.105 ms 00:24:49.476 [2024-11-20 09:21:44.553749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.554029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.554180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:49.476 [2024-11-20 09:21:44.554301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:49.476 [2024-11-20 09:21:44.554449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.563435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:49.476 [2024-11-20 09:21:44.563711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.476 [2024-11-20 09:21:44.563748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.854 ms 00:24:49.476 [2024-11-20 09:21:44.563766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.563956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.563981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.476 [2024-11-20 09:21:44.563998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:24:49.476 [2024-11-20 09:21:44.564013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.564096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.564126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:49.476 [2024-11-20 09:21:44.564142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:49.476 [2024-11-20 09:21:44.564156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.564204] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:49.476 [2024-11-20 09:21:44.569305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.569354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.476 [2024-11-20 09:21:44.569373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.113 ms 00:24:49.476 [2024-11-20 09:21:44.569387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.569481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.569504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:49.476 [2024-11-20 09:21:44.569519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:49.476 [2024-11-20 09:21:44.569533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.569571] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:49.476 [2024-11-20 09:21:44.569614] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:49.476 [2024-11-20 09:21:44.569691] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:49.476 [2024-11-20 09:21:44.569722] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:49.476 [2024-11-20 09:21:44.569838] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:49.476 [2024-11-20 09:21:44.569857] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:49.476 [2024-11-20 09:21:44.569875] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:49.476 [2024-11-20 09:21:44.569893] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:49.476 [2024-11-20 09:21:44.569918] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:49.476 [2024-11-20 09:21:44.569933] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:49.476 [2024-11-20 09:21:44.569947] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:49.476 [2024-11-20 09:21:44.569961] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:49.476 [2024-11-20 09:21:44.569975] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:49.476 [2024-11-20 09:21:44.569990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.570004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:49.476 [2024-11-20 09:21:44.570019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:24:49.476 [2024-11-20 09:21:44.570033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.570151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.476 [2024-11-20 09:21:44.570174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:49.476 [2024-11-20 09:21:44.570197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:49.476 [2024-11-20 09:21:44.570212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.476 [2024-11-20 09:21:44.570342] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:49.476 [2024-11-20 09:21:44.570365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:49.476 [2024-11-20 09:21:44.570380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.476 [2024-11-20 09:21:44.570395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.476 [2024-11-20 09:21:44.570410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:49.476 [2024-11-20 09:21:44.570423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:49.477 [2024-11-20 09:21:44.570451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:49.477 [2024-11-20 09:21:44.570467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.477 [2024-11-20 09:21:44.570493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:49.477 [2024-11-20 09:21:44.570506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:49.477 [2024-11-20 09:21:44.570518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.477 [2024-11-20 09:21:44.570549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:49.477 [2024-11-20 09:21:44.570563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:49.477 [2024-11-20 09:21:44.570576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:49.477 [2024-11-20 09:21:44.570602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:49.477 [2024-11-20 09:21:44.570617] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:49.477 [2024-11-20 09:21:44.570644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.477 [2024-11-20 09:21:44.570693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:49.477 [2024-11-20 09:21:44.570707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.477 [2024-11-20 09:21:44.570734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:49.477 [2024-11-20 09:21:44.570747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.477 [2024-11-20 09:21:44.570773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:49.477 [2024-11-20 09:21:44.570786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.477 [2024-11-20 09:21:44.570812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:49.477 [2024-11-20 09:21:44.570825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.477 [2024-11-20 09:21:44.570851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:49.477 [2024-11-20 09:21:44.570863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:49.477 [2024-11-20 09:21:44.570876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.477 [2024-11-20 09:21:44.570889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:49.477 [2024-11-20 09:21:44.570902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:49.477 [2024-11-20 09:21:44.570914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:49.477 [2024-11-20 09:21:44.570940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:49.477 [2024-11-20 09:21:44.570955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.477 [2024-11-20 09:21:44.570968] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:49.477 [2024-11-20 09:21:44.570983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:49.477 [2024-11-20 09:21:44.570998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.477 [2024-11-20 09:21:44.571019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.477 [2024-11-20 09:21:44.571033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:49.477 [2024-11-20 09:21:44.571047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:49.477 [2024-11-20 09:21:44.571060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:49.477 
[2024-11-20 09:21:44.571076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:49.477 [2024-11-20 09:21:44.571089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:49.477 [2024-11-20 09:21:44.571102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:49.477 [2024-11-20 09:21:44.571118] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:49.477 [2024-11-20 09:21:44.571134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.477 [2024-11-20 09:21:44.571150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:49.477 [2024-11-20 09:21:44.571164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:49.477 [2024-11-20 09:21:44.571178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:49.477 [2024-11-20 09:21:44.571191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:49.477 [2024-11-20 09:21:44.571204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:49.477 [2024-11-20 09:21:44.571218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:49.477 [2024-11-20 09:21:44.571233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:49.477 [2024-11-20 09:21:44.571246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:49.477 [2024-11-20 09:21:44.571260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:49.477 [2024-11-20 09:21:44.571274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:49.477 [2024-11-20 09:21:44.571287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:49.477 [2024-11-20 09:21:44.571301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:49.477 [2024-11-20 09:21:44.571315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:49.477 [2024-11-20 09:21:44.571329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:49.477 [2024-11-20 09:21:44.571343] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:49.477 [2024-11-20 09:21:44.571359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.477 [2024-11-20 09:21:44.571376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:49.477 [2024-11-20 09:21:44.571391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:49.477 [2024-11-20 09:21:44.571405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:49.477 [2024-11-20 09:21:44.571419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:49.477 [2024-11-20 09:21:44.571434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.477 [2024-11-20 09:21:44.571448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:49.477 [2024-11-20 09:21:44.571469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.161 ms 00:24:49.477 [2024-11-20 09:21:44.571484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.612202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.612284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.736 [2024-11-20 09:21:44.612310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.629 ms 00:24:49.736 [2024-11-20 09:21:44.612333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.612564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.612597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:49.736 [2024-11-20 09:21:44.612614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:24:49.736 [2024-11-20 09:21:44.612629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.670851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.671165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.736 [2024-11-20 09:21:44.671202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.154 ms 00:24:49.736 [2024-11-20 09:21:44.671230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.671429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.671454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.736 [2024-11-20 09:21:44.671471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:49.736 [2024-11-20 09:21:44.671486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.672117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.672147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.736 [2024-11-20 09:21:44.672164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:24:49.736 [2024-11-20 09:21:44.672189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.672386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.672410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.736 [2024-11-20 09:21:44.672426] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:24:49.736 [2024-11-20 09:21:44.672440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.692991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.693068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.736 [2024-11-20 09:21:44.693094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.509 ms 00:24:49.736 [2024-11-20 09:21:44.693109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.710246] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:49.736 [2024-11-20 09:21:44.710503] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:49.736 [2024-11-20 09:21:44.710536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.710562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:49.736 [2024-11-20 09:21:44.710583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.214 ms 00:24:49.736 [2024-11-20 09:21:44.710598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.740699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.741030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:49.736 [2024-11-20 09:21:44.741087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.932 ms 00:24:49.736 [2024-11-20 09:21:44.741103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.758066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.758144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:49.736 [2024-11-20 09:21:44.758171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.800 ms 00:24:49.736 [2024-11-20 09:21:44.758187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.774410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.774487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:49.736 [2024-11-20 09:21:44.774510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.051 ms 00:24:49.736 [2024-11-20 09:21:44.774524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.736 [2024-11-20 09:21:44.775615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.736 [2024-11-20 09:21:44.775687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:49.736 [2024-11-20 09:21:44.775709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:24:49.736 [2024-11-20 09:21:44.775723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.995 [2024-11-20 09:21:44.855882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.995 [2024-11-20 09:21:44.855964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:49.995 [2024-11-20 09:21:44.855990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.115 ms 00:24:49.995 [2024-11-20 09:21:44.856006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.995 [2024-11-20 09:21:44.870970] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:49.995 [2024-11-20 09:21:44.893167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.995 [2024-11-20 09:21:44.893256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:49.995 [2024-11-20 09:21:44.893282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.974 ms 00:24:49.995 [2024-11-20 09:21:44.893297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.995 [2024-11-20 09:21:44.893498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.995 [2024-11-20 09:21:44.893528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:49.995 [2024-11-20 09:21:44.893545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:49.995 [2024-11-20 09:21:44.893559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.995 [2024-11-20 09:21:44.893644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.995 [2024-11-20 09:21:44.893701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:49.995 [2024-11-20 09:21:44.893718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:49.995 [2024-11-20 09:21:44.893732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.995 [2024-11-20 09:21:44.893784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.995 [2024-11-20 09:21:44.893805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:49.995 [2024-11-20 09:21:44.893826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:49.995 [2024-11-20 09:21:44.893840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.995 [2024-11-20 09:21:44.893897] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:49.995 [2024-11-20 09:21:44.893919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.995 [2024-11-20 09:21:44.893933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:49.995 [2024-11-20 09:21:44.893948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:49.995 [2024-11-20 09:21:44.893962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.995 [2024-11-20 09:21:44.926415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.995 [2024-11-20 09:21:44.926728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:49.995 [2024-11-20 09:21:44.926767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.411 ms 00:24:49.995 [2024-11-20 09:21:44.926784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.995 [2024-11-20 09:21:44.926993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.995 [2024-11-20 09:21:44.927018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:49.995 [2024-11-20 09:21:44.927034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:49.995 [2024-11-20 09:21:44.927049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:49.995 [2024-11-20 09:21:44.928365] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.995 [2024-11-20 09:21:44.933040] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 400.871 ms, result 0 00:24:49.995 [2024-11-20 09:21:44.933956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.995 [2024-11-20 09:21:44.950731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:50.929  [2024-11-20T09:21:46.985Z] Copying: 23/256 [MB] (23 MBps) [2024-11-20T09:21:48.363Z] Copying: 48/256 [MB] (25 MBps) [2024-11-20T09:21:49.317Z] Copying: 74/256 [MB] (26 MBps) [2024-11-20T09:21:50.274Z] Copying: 100/256 [MB] (26 MBps) [2024-11-20T09:21:51.209Z] Copying: 123/256 [MB] (22 MBps) [2024-11-20T09:21:52.143Z] Copying: 148/256 [MB] (24 MBps) [2024-11-20T09:21:53.077Z] Copying: 173/256 [MB] (25 MBps) [2024-11-20T09:21:54.010Z] Copying: 199/256 [MB] (25 MBps) [2024-11-20T09:21:55.387Z] Copying: 226/256 [MB] (27 MBps) [2024-11-20T09:21:55.387Z] Copying: 254/256 [MB] (27 MBps) [2024-11-20T09:21:55.387Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-20 09:21:55.034215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:00.267 [2024-11-20 09:21:55.047789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.047885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:00.267 [2024-11-20 09:21:55.047915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:00.267 [2024-11-20 09:21:55.047928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.047967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:00.267 [2024-11-20 09:21:55.051749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.051816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:00.267 [2024-11-20 09:21:55.051835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:25:00.267 [2024-11-20 09:21:55.051847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.053524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.053572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:00.267 [2024-11-20 09:21:55.053591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.623 ms 00:25:00.267 [2024-11-20 09:21:55.053603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.060836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.060923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:00.267 [2024-11-20 09:21:55.060955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.198 ms 00:25:00.267 [2024-11-20 09:21:55.060968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.068456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.068545] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:00.267 [2024-11-20 09:21:55.068565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.349 ms 00:25:00.267 [2024-11-20 09:21:55.068578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.124173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.124301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:00.267 [2024-11-20 09:21:55.124338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.458 ms 00:25:00.267 [2024-11-20 09:21:55.124360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.155372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.155826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:00.267 [2024-11-20 09:21:55.155892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.814 ms 00:25:00.267 [2024-11-20 09:21:55.155922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.156196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.156227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:00.267 [2024-11-20 09:21:55.156249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:25:00.267 [2024-11-20 09:21:55.156266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.192092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.192189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:00.267 [2024-11-20 09:21:55.192211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.788 ms 00:25:00.267 [2024-11-20 09:21:55.192223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.227514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.227868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:00.267 [2024-11-20 09:21:55.227903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.157 ms 00:25:00.267 [2024-11-20 09:21:55.227917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.262220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.262310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:00.267 [2024-11-20 09:21:55.262332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.163 ms 00:25:00.267 [2024-11-20 09:21:55.262345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.297343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.267 [2024-11-20 09:21:55.297428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:00.267 [2024-11-20 09:21:55.297449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.818 ms 00:25:00.267 [2024-11-20 09:21:55.297460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.267 [2024-11-20 09:21:55.297593] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:25:00.267 [2024-11-20 09:21:55.297634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-99: 0 / 261120 wr_cnt: 0 state: free 00:25:00.268 [2024-11-20 09:21:55.299250] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:00.268 [2024-11-20 09:21:55.299272] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:00.268 [2024-11-20 09:21:55.299284] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e0f82ac-54e0-4f88-a0e4-9e19270c421c 00:25:00.268 [2024-11-20 09:21:55.299297] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:00.268 [2024-11-20 09:21:55.299307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:00.268 [2024-11-20 09:21:55.299319] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:00.268 [2024-11-20 09:21:55.299332] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:00.268 [2024-11-20 09:21:55.299343] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:00.268 [2024-11-20 09:21:55.299355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:00.268 [2024-11-20 09:21:55.299366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:00.268 [2024-11-20 09:21:55.299376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:00.268 [2024-11-20 09:21:55.299386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:00.268 [2024-11-20 09:21:55.299400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.268 [2024-11-20 09:21:55.299412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:00.268 [2024-11-20 09:21:55.299602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:25:00.268 [2024-11-20 09:21:55.299615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.268 [2024-11-20 09:21:55.318303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.268 [2024-11-20 09:21:55.318544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:00.268 [2024-11-20 09:21:55.318680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.301 ms 00:25:00.268 [2024-11-20 09:21:55.318735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.268 [2024-11-20 09:21:55.319397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.268 [2024-11-20 09:21:55.319529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:00.268 [2024-11-20 09:21:55.319632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:25:00.268 [2024-11-20 09:21:55.319748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.268 [2024-11-20 09:21:55.367946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.268 [2024-11-20 09:21:55.368299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.268 [2024-11-20 09:21:55.368414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.268 [2024-11-20 09:21:55.368532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.268 [2024-11-20 09:21:55.368739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.268 [2024-11-20 09:21:55.368860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.268 [2024-11-20 09:21:55.368961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.268 [2024-11-20 09:21:55.369058] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.268 [2024-11-20 09:21:55.369188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.268 [2024-11-20 09:21:55.369239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.268 [2024-11-20 09:21:55.369354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.268 [2024-11-20 09:21:55.369403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.268 [2024-11-20 09:21:55.369439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.268 [2024-11-20 09:21:55.369454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.268 [2024-11-20 09:21:55.369474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.268 [2024-11-20 09:21:55.369487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.527 [2024-11-20 09:21:55.484514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.527 [2024-11-20 09:21:55.484583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.527 [2024-11-20 09:21:55.484603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.527 [2024-11-20 09:21:55.484615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.527 [2024-11-20 09:21:55.580683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.527 [2024-11-20 09:21:55.580794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.527 [2024-11-20 09:21:55.580841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.527 [2024-11-20 09:21:55.580854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.527 [2024-11-20 09:21:55.580989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.527 [2024-11-20 09:21:55.581009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.527 [2024-11-20 09:21:55.581023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.527 [2024-11-20 09:21:55.581035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.527 [2024-11-20 09:21:55.581078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.527 [2024-11-20 09:21:55.581092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.527 [2024-11-20 09:21:55.581105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.527 [2024-11-20 09:21:55.581124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.527 [2024-11-20 09:21:55.581277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.527 [2024-11-20 09:21:55.581297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.527 [2024-11-20 09:21:55.581310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.527 [2024-11-20 09:21:55.581322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.527 [2024-11-20 09:21:55.581379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.527 [2024-11-20 09:21:55.581397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:00.527 [2024-11-20 09:21:55.581411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms
00:25:00.527 [2024-11-20 09:21:55.581422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.527 [2024-11-20 09:21:55.581500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:00.527 [2024-11-20 09:21:55.581517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:00.527 [2024-11-20 09:21:55.581530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:00.527 [2024-11-20 09:21:55.581543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.527 [2024-11-20 09:21:55.581619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:00.527 [2024-11-20 09:21:55.581636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:00.527 [2024-11-20 09:21:55.581666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:00.527 [2024-11-20 09:21:55.581686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:00.527 [2024-11-20 09:21:55.581898] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.108 ms, result 0
00:25:01.900
00:25:01.900
00:25:01.900 09:21:56 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78584
00:25:01.900 09:21:56 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78584
00:25:01.900 09:21:56 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78584 ']'
00:25:01.900 09:21:56 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:01.900 09:21:56 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:25:01.900 09:21:56 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:01.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:01.900 09:21:56 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:01.900 09:21:56 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:01.900 09:21:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:25:01.900 [2024-11-20 09:21:56.909367] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
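The xtrace above is the ftl_trim harness restarting its SPDK target: trim.sh launches spdk_tgt with FTL init-phase logging, records the PID in svcpid, and waitforlisten polls until the target answers on the RPC socket at /var/tmp/spdk.sock. The saved bdev/FTL configuration is then replayed over RPC (trim.sh@75), and trim.sh@78/@79 further below issue the two bdev_ftl_unmap calls. A minimal standalone sketch of that sequence, assuming the paths shown in this log; the readiness poll via rpc_get_methods and the saved-config path are assumptions standing in for the real waitforlisten helper in common/autotest_common.sh and the config file the test saved earlier:

#!/usr/bin/env bash
# Sketch of the target-restart flow traced above (not the harness itself).
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
rpc_addr=/var/tmp/spdk.sock

# Launch the SPDK application with FTL init-phase logging; keep its PID.
"$SPDK/build/bin/spdk_tgt" -L ftl_init &
svcpid=$!

# Poll until the target listens on the UNIX domain socket
# (rpc_get_methods used here as an assumed liveness probe).
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done

# Replay the bdev/FTL configuration saved earlier in the test
# (placeholder path; trim.sh feeds in its own saved config).
"$SPDK/scripts/rpc.py" -s "$rpc_addr" load_config < /tmp/ftl_config.json

# The two trim calls issued at trim.sh@78 and trim.sh@79 below:
# 1024 blocks at LBA 0 and 1024 blocks at the top of the L2P range.
"$SPDK/scripts/rpc.py" -s "$rpc_addr" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
"$SPDK/scripts/rpc.py" -s "$rpc_addr" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# killprocess equivalent: SIGTERM drives the 'FTL shutdown' sequence logged here.
kill "$svcpid"
wait "$svcpid"

Note that the second unmap targets the last 1024 blocks of the address space: the layout dump below reports 23592960 L2P entries, and 23592960 - 1024 = 23591936, matching the --lba argument at trim.sh@79.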
00:25:01.900 [2024-11-20 09:21:56.909531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78584 ] 00:25:02.157 [2024-11-20 09:21:57.085074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.157 [2024-11-20 09:21:57.220373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.090 09:21:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.090 09:21:58 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:03.090 09:21:58 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:03.656 [2024-11-20 09:21:58.475364] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.656 [2024-11-20 09:21:58.475492] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.656 [2024-11-20 09:21:58.664195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.664290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:03.656 [2024-11-20 09:21:58.664320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:03.656 [2024-11-20 09:21:58.664334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.668998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.669064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:03.656 [2024-11-20 09:21:58.669088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.631 ms 00:25:03.656 [2024-11-20 09:21:58.669101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.669321] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:03.656 [2024-11-20 09:21:58.670339] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:03.656 [2024-11-20 09:21:58.670531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.670553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:03.656 [2024-11-20 09:21:58.670570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:25:03.656 [2024-11-20 09:21:58.670582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.672912] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:03.656 [2024-11-20 09:21:58.691347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.691483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:03.656 [2024-11-20 09:21:58.691509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.437 ms 00:25:03.656 [2024-11-20 09:21:58.691531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.691840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.691875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:03.656 [2024-11-20 09:21:58.691891] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:03.656 [2024-11-20 09:21:58.691911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.701886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.702314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:03.656 [2024-11-20 09:21:58.702349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.878 ms 00:25:03.656 [2024-11-20 09:21:58.702379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.702705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.702744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:03.656 [2024-11-20 09:21:58.702762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:25:03.656 [2024-11-20 09:21:58.702781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.702857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.702881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:03.656 [2024-11-20 09:21:58.702896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:03.656 [2024-11-20 09:21:58.702915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.702956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:03.656 [2024-11-20 09:21:58.708344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.708410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:03.656 [2024-11-20 09:21:58.708436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.392 ms 00:25:03.656 [2024-11-20 09:21:58.708450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.708595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.656 [2024-11-20 09:21:58.708615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:03.656 [2024-11-20 09:21:58.708635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:03.656 [2024-11-20 09:21:58.708683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.656 [2024-11-20 09:21:58.708745] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:03.656 [2024-11-20 09:21:58.708783] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:03.656 [2024-11-20 09:21:58.708850] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:03.656 [2024-11-20 09:21:58.708876] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:03.656 [2024-11-20 09:21:58.708997] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:03.656 [2024-11-20 09:21:58.709015] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:03.657 [2024-11-20 09:21:58.709039] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:03.657 [2024-11-20 09:21:58.709062] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:03.657 [2024-11-20 09:21:58.709082] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:03.657 [2024-11-20 09:21:58.709096] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:03.657 [2024-11-20 09:21:58.709114] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:03.657 [2024-11-20 09:21:58.709127] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:03.657 [2024-11-20 09:21:58.709149] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:03.657 [2024-11-20 09:21:58.709162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.657 [2024-11-20 09:21:58.709180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:03.657 [2024-11-20 09:21:58.709193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:25:03.657 [2024-11-20 09:21:58.709211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.657 [2024-11-20 09:21:58.709319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.657 [2024-11-20 09:21:58.709341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:03.657 [2024-11-20 09:21:58.709419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:03.657 [2024-11-20 09:21:58.709449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.657 [2024-11-20 09:21:58.709577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:03.657 [2024-11-20 09:21:58.709603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:03.657 [2024-11-20 09:21:58.709619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.657 [2024-11-20 09:21:58.709638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.657 [2024-11-20 09:21:58.709667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:03.657 [2024-11-20 09:21:58.709682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:03.657 [2024-11-20 09:21:58.709694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:03.657 [2024-11-20 09:21:58.709713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:03.657 [2024-11-20 09:21:58.709724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:03.657 [2024-11-20 09:21:58.709738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.657 [2024-11-20 09:21:58.709749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:03.657 [2024-11-20 09:21:58.709764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:03.657 [2024-11-20 09:21:58.709776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.657 [2024-11-20 09:21:58.709789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:03.657 [2024-11-20 09:21:58.709800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:03.657 [2024-11-20 09:21:58.709814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.657 
[2024-11-20 09:21:58.709824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:03.657 [2024-11-20 09:21:58.709838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:03.657 [2024-11-20 09:21:58.709849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.657 [2024-11-20 09:21:58.709864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:03.657 [2024-11-20 09:21:58.709887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:03.657 [2024-11-20 09:21:58.709902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.657 [2024-11-20 09:21:58.709913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:03.657 [2024-11-20 09:21:58.709929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:03.657 [2024-11-20 09:21:58.709940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.657 [2024-11-20 09:21:58.709953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:03.657 [2024-11-20 09:21:58.709965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:03.657 [2024-11-20 09:21:58.709978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.657 [2024-11-20 09:21:58.709989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:03.657 [2024-11-20 09:21:58.710003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:03.657 [2024-11-20 09:21:58.710013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.657 [2024-11-20 09:21:58.710032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:03.657 [2024-11-20 09:21:58.710045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:03.657 [2024-11-20 09:21:58.710063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.657 [2024-11-20 09:21:58.710075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:03.657 [2024-11-20 09:21:58.710093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:03.657 [2024-11-20 09:21:58.710105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.657 [2024-11-20 09:21:58.710122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:03.657 [2024-11-20 09:21:58.710134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:03.657 [2024-11-20 09:21:58.710172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.657 [2024-11-20 09:21:58.710185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:03.657 [2024-11-20 09:21:58.710202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:03.657 [2024-11-20 09:21:58.710215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.657 [2024-11-20 09:21:58.710232] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:03.657 [2024-11-20 09:21:58.710245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:03.657 [2024-11-20 09:21:58.710269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.657 [2024-11-20 09:21:58.710282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.657 [2024-11-20 09:21:58.710314] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:03.657 [2024-11-20 09:21:58.710327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:03.657 [2024-11-20 09:21:58.710344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:03.657 [2024-11-20 09:21:58.710356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:03.657 [2024-11-20 09:21:58.710372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:03.657 [2024-11-20 09:21:58.710385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:03.657 [2024-11-20 09:21:58.710404] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:03.657 [2024-11-20 09:21:58.710420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.657 [2024-11-20 09:21:58.710447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:03.657 [2024-11-20 09:21:58.710470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:03.657 [2024-11-20 09:21:58.710487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:03.657 [2024-11-20 09:21:58.710501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:03.657 [2024-11-20 09:21:58.710518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:03.657 [2024-11-20 09:21:58.710532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:03.657 [2024-11-20 09:21:58.710549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:03.657 [2024-11-20 09:21:58.710562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:03.657 [2024-11-20 09:21:58.710580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:03.657 [2024-11-20 09:21:58.710593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:03.657 [2024-11-20 09:21:58.710610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:03.657 [2024-11-20 09:21:58.710624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:03.657 [2024-11-20 09:21:58.710641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:03.657 [2024-11-20 09:21:58.710677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:03.657 [2024-11-20 09:21:58.710711] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:03.658 [2024-11-20 
09:21:58.710738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.658 [2024-11-20 09:21:58.710777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:03.658 [2024-11-20 09:21:58.710800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:03.658 [2024-11-20 09:21:58.710831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:03.658 [2024-11-20 09:21:58.710855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:03.658 [2024-11-20 09:21:58.710888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.658 [2024-11-20 09:21:58.710912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:03.658 [2024-11-20 09:21:58.710945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:25:03.658 [2024-11-20 09:21:58.710970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.658 [2024-11-20 09:21:58.755289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.658 [2024-11-20 09:21:58.755378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:03.658 [2024-11-20 09:21:58.755410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.161 ms 00:25:03.658 [2024-11-20 09:21:58.755425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.658 [2024-11-20 09:21:58.755757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.658 [2024-11-20 09:21:58.755783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:03.658 [2024-11-20 09:21:58.755807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:03.658 [2024-11-20 09:21:58.755822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.805381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.805477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:03.916 [2024-11-20 09:21:58.805507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.507 ms 00:25:03.916 [2024-11-20 09:21:58.805522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.805734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.805760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:03.916 [2024-11-20 09:21:58.805783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:03.916 [2024-11-20 09:21:58.805796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.806494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.806522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:03.916 [2024-11-20 09:21:58.806551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:25:03.916 [2024-11-20 09:21:58.806565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.806769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.806789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:03.916 [2024-11-20 09:21:58.806808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:25:03.916 [2024-11-20 09:21:58.806821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.830785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.830874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:03.916 [2024-11-20 09:21:58.830900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.917 ms 00:25:03.916 [2024-11-20 09:21:58.830914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.849890] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:03.916 [2024-11-20 09:21:58.850006] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:03.916 [2024-11-20 09:21:58.850045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.850061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:03.916 [2024-11-20 09:21:58.850086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.892 ms 00:25:03.916 [2024-11-20 09:21:58.850100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.882554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.882697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:03.916 [2024-11-20 09:21:58.882744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.159 ms 00:25:03.916 [2024-11-20 09:21:58.882766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.901357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.901463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:03.916 [2024-11-20 09:21:58.901501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.348 ms 00:25:03.916 [2024-11-20 09:21:58.901525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.919783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.920154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:03.916 [2024-11-20 09:21:58.920202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.991 ms 00:25:03.916 [2024-11-20 09:21:58.920218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.916 [2024-11-20 09:21:58.921440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.916 [2024-11-20 09:21:58.921483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:03.917 [2024-11-20 09:21:58.921509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:25:03.917 [2024-11-20 09:21:58.921524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.917 [2024-11-20 
09:21:59.020306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.917 [2024-11-20 09:21:59.020405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:03.917 [2024-11-20 09:21:59.020437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.721 ms 00:25:03.917 [2024-11-20 09:21:59.020452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.178 [2024-11-20 09:21:59.037879] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:04.178 [2024-11-20 09:21:59.062782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.178 [2024-11-20 09:21:59.062878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:04.178 [2024-11-20 09:21:59.062909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.082 ms 00:25:04.178 [2024-11-20 09:21:59.062929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.178 [2024-11-20 09:21:59.063153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.178 [2024-11-20 09:21:59.063182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:04.178 [2024-11-20 09:21:59.063197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:04.178 [2024-11-20 09:21:59.063216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.178 [2024-11-20 09:21:59.063306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.178 [2024-11-20 09:21:59.063331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:04.178 [2024-11-20 09:21:59.063348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:04.178 [2024-11-20 09:21:59.063367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.178 [2024-11-20 09:21:59.063411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.178 [2024-11-20 09:21:59.063433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:04.179 [2024-11-20 09:21:59.063447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:04.179 [2024-11-20 09:21:59.063464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.179 [2024-11-20 09:21:59.063520] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:04.179 [2024-11-20 09:21:59.063549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.179 [2024-11-20 09:21:59.063562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:04.179 [2024-11-20 09:21:59.063591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:04.179 [2024-11-20 09:21:59.063604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.179 [2024-11-20 09:21:59.099197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.179 [2024-11-20 09:21:59.099297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:04.179 [2024-11-20 09:21:59.099330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.515 ms 00:25:04.179 [2024-11-20 09:21:59.099345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.179 [2024-11-20 09:21:59.099627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.179 [2024-11-20 09:21:59.099668] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:04.179 [2024-11-20 09:21:59.099702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:04.179 [2024-11-20 09:21:59.099730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.179 [2024-11-20 09:21:59.101322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:04.179 [2024-11-20 09:21:59.107116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 436.662 ms, result 0 00:25:04.179 [2024-11-20 09:21:59.108562] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:04.179 Some configs were skipped because the RPC state that can call them passed over. 00:25:04.179 09:21:59 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:25:04.439 [2024-11-20 09:21:59.455850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.439 [2024-11-20 09:21:59.456226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:04.439 [2024-11-20 09:21:59.456374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:25:04.439 [2024-11-20 09:21:59.456440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.439 [2024-11-20 09:21:59.456628] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.323 ms, result 0 00:25:04.439 true 00:25:04.440 09:21:59 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:25:04.697 [2024-11-20 09:21:59.783804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:04.697 [2024-11-20 09:21:59.784149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:25:04.697 [2024-11-20 09:21:59.784296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:25:04.697 [2024-11-20 09:21:59.784426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:04.697 [2024-11-20 09:21:59.784551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.743 ms, result 0 00:25:04.697 true 00:25:04.697 09:21:59 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78584 00:25:04.697 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78584 ']' 00:25:04.697 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78584 00:25:04.697 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:25:04.697 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.697 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78584 00:25:04.954 killing process with pid 78584 00:25:04.954 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.954 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.954 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78584' 00:25:04.954 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78584 00:25:04.954 09:21:59 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78584 00:25:05.888 [2024-11-20 09:22:00.926560] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.926683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:05.888 [2024-11-20 09:22:00.926709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:05.888 [2024-11-20 09:22:00.926743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.888 [2024-11-20 09:22:00.926802] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:05.888 [2024-11-20 09:22:00.930799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.930856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:05.888 [2024-11-20 09:22:00.930881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.955 ms 00:25:05.888 [2024-11-20 09:22:00.930894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.888 [2024-11-20 09:22:00.931377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.931409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:05.888 [2024-11-20 09:22:00.931428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:25:05.888 [2024-11-20 09:22:00.931441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.888 [2024-11-20 09:22:00.935764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.935834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:05.888 [2024-11-20 09:22:00.935862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.282 ms 00:25:05.888 [2024-11-20 09:22:00.935874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.888 [2024-11-20 09:22:00.943614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.943733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:05.888 [2024-11-20 09:22:00.943759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.658 ms 00:25:05.888 [2024-11-20 09:22:00.943774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.888 [2024-11-20 09:22:00.957594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.957703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:05.888 [2024-11-20 09:22:00.957738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.622 ms 00:25:05.888 [2024-11-20 09:22:00.957772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.888 [2024-11-20 09:22:00.967399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.967510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:05.888 [2024-11-20 09:22:00.967543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.445 ms 00:25:05.888 [2024-11-20 09:22:00.967559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.888 [2024-11-20 09:22:00.967856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.967886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:05.888 [2024-11-20 09:22:00.967905] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:25:05.888 [2024-11-20 09:22:00.967917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.888 [2024-11-20 09:22:00.981798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.981897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:05.888 [2024-11-20 09:22:00.981929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.815 ms 00:25:05.888 [2024-11-20 09:22:00.981943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:05.888 [2024-11-20 09:22:00.995281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:05.888 [2024-11-20 09:22:00.995387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:05.888 [2024-11-20 09:22:00.995439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.187 ms 00:25:05.888 [2024-11-20 09:22:00.995456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.147 [2024-11-20 09:22:01.008714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.147 [2024-11-20 09:22:01.009099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:06.147 [2024-11-20 09:22:01.009153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.085 ms 00:25:06.147 [2024-11-20 09:22:01.009170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.147 [2024-11-20 09:22:01.022464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.147 [2024-11-20 09:22:01.022555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:06.147 [2024-11-20 09:22:01.022584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.090 ms 00:25:06.147 [2024-11-20 09:22:01.022599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.147 [2024-11-20 09:22:01.022771] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:06.147 [2024-11-20 09:22:01.022799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 
09:22:01.022981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.022999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.023013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.023034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.023048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:06.147 [2024-11-20 09:22:01.023065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:06.148 [2024-11-20 09:22:01.023358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.023991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.024004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.024019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.024032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.024047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.024060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.024076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.024089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:06.148 [2024-11-20 09:22:01.024107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:06.149 [2024-11-20 09:22:01.024351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:06.149 [2024-11-20 09:22:01.024394] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e0f82ac-54e0-4f88-a0e4-9e19270c421c 00:25:06.149 [2024-11-20 09:22:01.024427] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:06.149 [2024-11-20 09:22:01.024453] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:06.149 [2024-11-20 09:22:01.024466] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:06.149 [2024-11-20 09:22:01.024484] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:06.149 [2024-11-20 09:22:01.024496] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:06.149 [2024-11-20 09:22:01.024513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:06.149 [2024-11-20 09:22:01.024526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:06.149 [2024-11-20 09:22:01.024542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:06.149 [2024-11-20 09:22:01.024553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:06.149 [2024-11-20 09:22:01.024573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:06.149 [2024-11-20 09:22:01.024586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:06.149 [2024-11-20 09:22:01.024605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:25:06.149 [2024-11-20 09:22:01.024618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.149 [2024-11-20 09:22:01.043282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.149 [2024-11-20 09:22:01.043362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:06.149 [2024-11-20 09:22:01.043397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.541 ms 00:25:06.149 [2024-11-20 09:22:01.043412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.149 [2024-11-20 09:22:01.044049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:06.149 [2024-11-20 09:22:01.044074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:06.149 [2024-11-20 09:22:01.044102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:25:06.149 [2024-11-20 09:22:01.044122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.149 [2024-11-20 09:22:01.106022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.149 [2024-11-20 09:22:01.106112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:06.149 [2024-11-20 09:22:01.106159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.149 [2024-11-20 09:22:01.106174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.149 [2024-11-20 09:22:01.106348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.149 [2024-11-20 09:22:01.106366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:06.149 [2024-11-20 09:22:01.106382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.149 [2024-11-20 09:22:01.106398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.149 [2024-11-20 09:22:01.106497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.149 [2024-11-20 09:22:01.106517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:06.149 [2024-11-20 09:22:01.106536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.149 [2024-11-20 09:22:01.106548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.149 [2024-11-20 09:22:01.106578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.149 [2024-11-20 09:22:01.106592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:06.149 [2024-11-20 09:22:01.106608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.149 [2024-11-20 09:22:01.106619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.149 [2024-11-20 09:22:01.222165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.149 [2024-11-20 09:22:01.222268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:06.149 [2024-11-20 09:22:01.222309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.149 [2024-11-20 09:22:01.222324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.408 [2024-11-20 
09:22:01.316944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.408 [2024-11-20 09:22:01.317035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:06.408 [2024-11-20 09:22:01.317066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.408 [2024-11-20 09:22:01.317087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.408 [2024-11-20 09:22:01.317232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.408 [2024-11-20 09:22:01.317251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:06.408 [2024-11-20 09:22:01.317279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.408 [2024-11-20 09:22:01.317292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.408 [2024-11-20 09:22:01.317336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.408 [2024-11-20 09:22:01.317351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:06.408 [2024-11-20 09:22:01.317366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.408 [2024-11-20 09:22:01.317377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.408 [2024-11-20 09:22:01.317517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.408 [2024-11-20 09:22:01.317536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:06.408 [2024-11-20 09:22:01.317554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.408 [2024-11-20 09:22:01.317566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.408 [2024-11-20 09:22:01.317623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.408 [2024-11-20 09:22:01.317693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:06.408 [2024-11-20 09:22:01.317712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.408 [2024-11-20 09:22:01.317724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.408 [2024-11-20 09:22:01.317782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.408 [2024-11-20 09:22:01.317801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:06.408 [2024-11-20 09:22:01.317819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.408 [2024-11-20 09:22:01.317831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.408 [2024-11-20 09:22:01.317895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:06.408 [2024-11-20 09:22:01.317921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:06.408 [2024-11-20 09:22:01.317937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:06.408 [2024-11-20 09:22:01.317949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:06.408 [2024-11-20 09:22:01.318205] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.564 ms, result 0 00:25:07.406 09:22:02 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:07.406 09:22:02 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:07.406 [2024-11-20 09:22:02.521223] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:25:07.664 [2024-11-20 09:22:02.521901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78650 ] 00:25:07.664 [2024-11-20 09:22:02.711352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.922 [2024-11-20 09:22:02.853384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.179 [2024-11-20 09:22:03.242526] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.179 [2024-11-20 09:22:03.242630] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:08.437 [2024-11-20 09:22:03.410039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.437 [2024-11-20 09:22:03.410457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:08.437 [2024-11-20 09:22:03.410492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:08.437 [2024-11-20 09:22:03.410507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.437 [2024-11-20 09:22:03.414393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.437 [2024-11-20 09:22:03.414472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:08.437 [2024-11-20 09:22:03.414492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.829 ms 00:25:08.437 [2024-11-20 09:22:03.414504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.437 [2024-11-20 09:22:03.414797] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:08.437 [2024-11-20 09:22:03.415831] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:08.438 [2024-11-20 09:22:03.415876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.415891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.438 [2024-11-20 09:22:03.415906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:25:08.438 [2024-11-20 09:22:03.415918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.418236] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:08.438 [2024-11-20 09:22:03.436955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.437292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:08.438 [2024-11-20 09:22:03.437329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.711 ms 00:25:08.438 [2024-11-20 09:22:03.437344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.437597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.437626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:08.438 [2024-11-20 09:22:03.437642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.069 ms 00:25:08.438 [2024-11-20 09:22:03.437685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.448459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.448818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.438 [2024-11-20 09:22:03.448858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.691 ms 00:25:08.438 [2024-11-20 09:22:03.448873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.449150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.449174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.438 [2024-11-20 09:22:03.449189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:08.438 [2024-11-20 09:22:03.449201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.449245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.449268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:08.438 [2024-11-20 09:22:03.449282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:08.438 [2024-11-20 09:22:03.449294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.449331] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:08.438 [2024-11-20 09:22:03.454800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.454857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.438 [2024-11-20 09:22:03.454876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.479 ms 00:25:08.438 [2024-11-20 09:22:03.454887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.454995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.455014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:08.438 [2024-11-20 09:22:03.455029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:08.438 [2024-11-20 09:22:03.455041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.455074] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:08.438 [2024-11-20 09:22:03.455125] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:08.438 [2024-11-20 09:22:03.455172] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:08.438 [2024-11-20 09:22:03.455194] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:08.438 [2024-11-20 09:22:03.455307] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:08.438 [2024-11-20 09:22:03.455325] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:08.438 [2024-11-20 09:22:03.455342] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:08.438 [2024-11-20 09:22:03.455357] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:08.438 [2024-11-20 09:22:03.455377] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:08.438 [2024-11-20 09:22:03.455390] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:08.438 [2024-11-20 09:22:03.455403] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:08.438 [2024-11-20 09:22:03.455415] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:08.438 [2024-11-20 09:22:03.455427] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:08.438 [2024-11-20 09:22:03.455440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.455452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:08.438 [2024-11-20 09:22:03.455465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:25:08.438 [2024-11-20 09:22:03.455477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.455589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.438 [2024-11-20 09:22:03.455606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:08.438 [2024-11-20 09:22:03.455625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:08.438 [2024-11-20 09:22:03.455637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.438 [2024-11-20 09:22:03.455790] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:08.438 [2024-11-20 09:22:03.455811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:08.438 [2024-11-20 09:22:03.455825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.438 [2024-11-20 09:22:03.455838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.438 [2024-11-20 09:22:03.455851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:08.438 [2024-11-20 09:22:03.455862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:08.438 [2024-11-20 09:22:03.455874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:08.438 [2024-11-20 09:22:03.455886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:08.438 [2024-11-20 09:22:03.455898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:08.438 [2024-11-20 09:22:03.455908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.438 [2024-11-20 09:22:03.455919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:08.438 [2024-11-20 09:22:03.455930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:08.438 [2024-11-20 09:22:03.455940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:08.438 [2024-11-20 09:22:03.455968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:08.438 [2024-11-20 09:22:03.455980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:08.438 [2024-11-20 09:22:03.455991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.438 [2024-11-20 09:22:03.456003] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:08.438 [2024-11-20 09:22:03.456014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:08.438 [2024-11-20 09:22:03.456025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.438 [2024-11-20 09:22:03.456036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:08.438 [2024-11-20 09:22:03.456048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:08.438 [2024-11-20 09:22:03.456060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.438 [2024-11-20 09:22:03.456071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:08.438 [2024-11-20 09:22:03.456082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:08.438 [2024-11-20 09:22:03.456093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.438 [2024-11-20 09:22:03.456104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:08.438 [2024-11-20 09:22:03.456115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:08.438 [2024-11-20 09:22:03.456126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.438 [2024-11-20 09:22:03.456137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:08.438 [2024-11-20 09:22:03.456148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:08.438 [2024-11-20 09:22:03.456159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:08.438 [2024-11-20 09:22:03.456170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:08.438 [2024-11-20 09:22:03.456181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:08.438 [2024-11-20 09:22:03.456192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.438 [2024-11-20 09:22:03.456203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:08.438 [2024-11-20 09:22:03.456214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:08.438 [2024-11-20 09:22:03.456224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:08.438 [2024-11-20 09:22:03.456236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:08.438 [2024-11-20 09:22:03.456247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:08.438 [2024-11-20 09:22:03.456257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.438 [2024-11-20 09:22:03.456268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:08.438 [2024-11-20 09:22:03.456280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:08.438 [2024-11-20 09:22:03.456292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.438 [2024-11-20 09:22:03.456303] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:08.438 [2024-11-20 09:22:03.456317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:08.438 [2024-11-20 09:22:03.456329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:08.439 [2024-11-20 09:22:03.456347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:08.439 [2024-11-20 09:22:03.456360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:08.439 
[2024-11-20 09:22:03.456371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:08.439 [2024-11-20 09:22:03.456382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:08.439 [2024-11-20 09:22:03.456394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:08.439 [2024-11-20 09:22:03.456404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:08.439 [2024-11-20 09:22:03.456418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:08.439 [2024-11-20 09:22:03.456431] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:08.439 [2024-11-20 09:22:03.456446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.439 [2024-11-20 09:22:03.456460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:08.439 [2024-11-20 09:22:03.456471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:08.439 [2024-11-20 09:22:03.456494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:08.439 [2024-11-20 09:22:03.456506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:08.439 [2024-11-20 09:22:03.456518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:08.439 [2024-11-20 09:22:03.456529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:08.439 [2024-11-20 09:22:03.456548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:08.439 [2024-11-20 09:22:03.456559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:08.439 [2024-11-20 09:22:03.456571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:08.439 [2024-11-20 09:22:03.456583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:08.439 [2024-11-20 09:22:03.456594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:08.439 [2024-11-20 09:22:03.456606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:08.439 [2024-11-20 09:22:03.456618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:08.439 [2024-11-20 09:22:03.456630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:08.439 [2024-11-20 09:22:03.456642] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:08.439 [2024-11-20 09:22:03.457176] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:08.439 [2024-11-20 09:22:03.457253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:08.439 [2024-11-20 09:22:03.457485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:08.439 [2024-11-20 09:22:03.457550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:08.439 [2024-11-20 09:22:03.457767] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:08.439 [2024-11-20 09:22:03.457937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.439 [2024-11-20 09:22:03.457991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:08.439 [2024-11-20 09:22:03.458110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.214 ms 00:25:08.439 [2024-11-20 09:22:03.458183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.439 [2024-11-20 09:22:03.500314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.439 [2024-11-20 09:22:03.500682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.439 [2024-11-20 09:22:03.500825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.996 ms 00:25:08.439 [2024-11-20 09:22:03.500890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.439 [2024-11-20 09:22:03.501261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.439 [2024-11-20 09:22:03.501393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:08.439 [2024-11-20 09:22:03.501524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:08.439 [2024-11-20 09:22:03.501551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.560196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.560291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.696 [2024-11-20 09:22:03.560321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.592 ms 00:25:08.696 [2024-11-20 09:22:03.560335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.560511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.560533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.696 [2024-11-20 09:22:03.560548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:08.696 [2024-11-20 09:22:03.560561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.561221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.561248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.696 [2024-11-20 09:22:03.561263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:25:08.696 [2024-11-20 09:22:03.561286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 
09:22:03.561467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.561488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:08.696 [2024-11-20 09:22:03.561501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:25:08.696 [2024-11-20 09:22:03.561513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.582375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.582473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:08.696 [2024-11-20 09:22:03.582497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.828 ms 00:25:08.696 [2024-11-20 09:22:03.582510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.601128] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:08.696 [2024-11-20 09:22:03.601223] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:08.696 [2024-11-20 09:22:03.601250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.601266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:08.696 [2024-11-20 09:22:03.601284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.485 ms 00:25:08.696 [2024-11-20 09:22:03.601297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.634703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.635155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:08.696 [2024-11-20 09:22:03.635203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.139 ms 00:25:08.696 [2024-11-20 09:22:03.635225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.655875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.655998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:08.696 [2024-11-20 09:22:03.656022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.394 ms 00:25:08.696 [2024-11-20 09:22:03.656035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.674844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.674943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:08.696 [2024-11-20 09:22:03.674967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.552 ms 00:25:08.696 [2024-11-20 09:22:03.674980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.676100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.676143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:08.696 [2024-11-20 09:22:03.676162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:25:08.696 [2024-11-20 09:22:03.676175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.763859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.763967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:08.696 [2024-11-20 09:22:03.763993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.641 ms 00:25:08.696 [2024-11-20 09:22:03.764007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.781537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:08.696 [2024-11-20 09:22:03.806664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.806831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:08.696 [2024-11-20 09:22:03.806861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.431 ms 00:25:08.696 [2024-11-20 09:22:03.806874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.807165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.807189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:08.696 [2024-11-20 09:22:03.807207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:08.696 [2024-11-20 09:22:03.807220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.807355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.807382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:08.696 [2024-11-20 09:22:03.807397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:08.696 [2024-11-20 09:22:03.807409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.807464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.807485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:08.696 [2024-11-20 09:22:03.807498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:08.696 [2024-11-20 09:22:03.807510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.696 [2024-11-20 09:22:03.807563] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:08.696 [2024-11-20 09:22:03.807582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.696 [2024-11-20 09:22:03.807595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:08.696 [2024-11-20 09:22:03.807608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:08.696 [2024-11-20 09:22:03.807620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.954 [2024-11-20 09:22:03.844504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.954 [2024-11-20 09:22:03.844635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:08.954 [2024-11-20 09:22:03.844682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.800 ms 00:25:08.954 [2024-11-20 09:22:03.844706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.954 [2024-11-20 09:22:03.845016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.954 [2024-11-20 09:22:03.845038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:25:08.954 [2024-11-20 09:22:03.845054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:08.954 [2024-11-20 09:22:03.845067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.954 [2024-11-20 09:22:03.846403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:08.954 [2024-11-20 09:22:03.852040] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 435.975 ms, result 0 00:25:08.954 [2024-11-20 09:22:03.853284] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:08.954 [2024-11-20 09:22:03.871154] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.886  [2024-11-20T09:22:05.938Z] Copying: 25/256 [MB] (25 MBps) [2024-11-20T09:22:07.312Z] Copying: 48/256 [MB] (23 MBps) [2024-11-20T09:22:08.247Z] Copying: 73/256 [MB] (24 MBps) [2024-11-20T09:22:09.183Z] Copying: 94/256 [MB] (21 MBps) [2024-11-20T09:22:10.115Z] Copying: 114/256 [MB] (19 MBps) [2024-11-20T09:22:11.049Z] Copying: 136/256 [MB] (22 MBps) [2024-11-20T09:22:11.983Z] Copying: 159/256 [MB] (22 MBps) [2024-11-20T09:22:12.917Z] Copying: 182/256 [MB] (23 MBps) [2024-11-20T09:22:14.292Z] Copying: 202/256 [MB] (19 MBps) [2024-11-20T09:22:15.225Z] Copying: 224/256 [MB] (21 MBps) [2024-11-20T09:22:15.484Z] Copying: 244/256 [MB] (20 MBps) [2024-11-20T09:22:15.484Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-20 09:22:15.449646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:20.364 [2024-11-20 09:22:15.463716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.364 [2024-11-20 09:22:15.463811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:20.364 [2024-11-20 09:22:15.463834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:20.364 [2024-11-20 09:22:15.463878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.364 [2024-11-20 09:22:15.463918] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:20.364 [2024-11-20 09:22:15.467840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.364 [2024-11-20 09:22:15.467901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:20.364 [2024-11-20 09:22:15.467920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.894 ms 00:25:20.364 [2024-11-20 09:22:15.467932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.364 [2024-11-20 09:22:15.468279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.364 [2024-11-20 09:22:15.468300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:20.364 [2024-11-20 09:22:15.468314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:25:20.364 [2024-11-20 09:22:15.468326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.364 [2024-11-20 09:22:15.472041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.364 [2024-11-20 09:22:15.472113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:20.364 [2024-11-20 09:22:15.472132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.688 ms 00:25:20.364 [2024-11-20 09:22:15.472145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.364 [2024-11-20 09:22:15.479570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.364 [2024-11-20 09:22:15.479988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:20.364 [2024-11-20 09:22:15.480025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.379 ms 00:25:20.364 [2024-11-20 09:22:15.480039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.624 [2024-11-20 09:22:15.516730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.624 [2024-11-20 09:22:15.517124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:20.624 [2024-11-20 09:22:15.517158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.528 ms 00:25:20.624 [2024-11-20 09:22:15.517173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.624 [2024-11-20 09:22:15.538258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.624 [2024-11-20 09:22:15.538407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:20.624 [2024-11-20 09:22:15.538430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.890 ms 00:25:20.624 [2024-11-20 09:22:15.538457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.624 [2024-11-20 09:22:15.538741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.624 [2024-11-20 09:22:15.538765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:20.624 [2024-11-20 09:22:15.538780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:25:20.624 [2024-11-20 09:22:15.538792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.624 [2024-11-20 09:22:15.576317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.624 [2024-11-20 09:22:15.576417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:20.624 [2024-11-20 09:22:15.576440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.465 ms 00:25:20.624 [2024-11-20 09:22:15.576453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.624 [2024-11-20 09:22:15.613024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.624 [2024-11-20 09:22:15.613121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:20.624 [2024-11-20 09:22:15.613145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.413 ms 00:25:20.624 [2024-11-20 09:22:15.613158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.624 [2024-11-20 09:22:15.648848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.624 [2024-11-20 09:22:15.648941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:20.624 [2024-11-20 09:22:15.648964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.545 ms 00:25:20.624 [2024-11-20 09:22:15.648976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.624 [2024-11-20 09:22:15.684127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.624 [2024-11-20 09:22:15.684224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:20.624 [2024-11-20 
09:22:15.684248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.943 ms 00:25:20.624 [2024-11-20 09:22:15.684260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.624 [2024-11-20 09:22:15.684383] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:20.624 [2024-11-20 09:22:15.684412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:20.624 [2024-11-20 09:22:15.684724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free
00:25:20.624-00:25:20.626 ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 24-99: 0 / 261120 wr_cnt: 0 state: free (one identical entry per band)
00:25:20.626 [2024-11-20 09:22:15.685759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:20.626 [2024-11-20 09:22:15.685782] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:20.626 [2024-11-20 09:22:15.685795] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e0f82ac-54e0-4f88-a0e4-9e19270c421c
00:25:20.626 [2024-11-20 09:22:15.685808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:20.626 [2024-11-20 09:22:15.685820] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:20.626 [2024-11-20 09:22:15.685831] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:20.626 [2024-11-20 09:22:15.685844] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:20.626 [2024-11-20 09:22:15.685855] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:20.626 [2024-11-20 09:22:15.685867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:25:20.626 [2024-11-20 09:22:15.685879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:25:20.626 [2024-11-20 09:22:15.685890] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:25:20.626 [2024-11-20 09:22:15.685901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:25:20.626 [2024-11-20 09:22:15.685914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:20.626 [2024-11-20 09:22:15.685940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:20.626 [2024-11-20 09:22:15.685962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms
00:25:20.626 [2024-11-20 09:22:15.685975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:20.626 [2024-11-20 09:22:15.704749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:20.626 [2024-11-20 09:22:15.704836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:20.626 [2024-11-20 09:22:15.704860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.718 ms
00:25:20.626 [2024-11-20 09:22:15.704872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:20.626 [2024-11-20 09:22:15.705453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:20.626 [2024-11-20 09:22:15.705478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:20.626 [2024-11-20 09:22:15.705493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms
00:25:20.626 [2024-11-20 09:22:15.705505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:20.884 [2024-11-20 09:22:15.755126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:20.884 [2024-11-20 09:22:15.755212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:20.884 [2024-11-20 09:22:15.755234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:20.884 [2024-11-20 09:22:15.755246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
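A quick gloss on the statistics block above: the FTL reports WAF (write amplification factor) as "inf" because this trim-only pass issued no user writes, while the device still performed 960 internal metadata writes. A minimal sketch in Python, assuming WAF here is total media writes divided by user writes (the function name waf is mine, not SPDK's):

    # Hedged sketch: assumes the dump's WAF is total media writes / user writes.
    def waf(total_writes: int, user_writes: int) -> float:
        # With zero user writes the ratio is undefined; the log prints "inf".
        if user_writes == 0:
            return float("inf")
        return total_writes / user_writes

    assert waf(960, 0) == float("inf")  # "total writes: 960", "user writes: 0" above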
00:25:20.884 [2024-11-20 09:22:15.755454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:20.884 [2024-11-20 09:22:15.755475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:20.884 [2024-11-20 09:22:15.755489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:20.884 [2024-11-20 09:22:15.755501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:20.884-00:25:20.885 mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback (duration: 0.000 ms, status: 0 for each step): Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:25:20.885 [2024-11-20 09:22:15.971574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 507.870 ms, result 0
00:25:22.260
00:25:22.260
00:25:22.260 09:22:17 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:25:22.517 09:22:17 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
09:22:17 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-20 09:22:17.681210] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
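The three trim.sh steps above verify the trimmed range and then rewrite a random pattern: cmp checks that the first 4 MiB (4194304 bytes) of the read-back file matches /dev/zero, md5sum fingerprints the whole file, and spdk_dd streams 1024 blocks of the random_pattern file into the ftl0 bdev. A rough Python equivalent of the two read-side checks, as a sketch only (paths are taken from the log; function names are mine, not the test's actual implementation):

    import hashlib

    DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path from the log

    def trimmed_range_is_zero(path: str = DATA, length: int = 4194304) -> bool:
        # Mirrors `cmp --bytes=4194304 <path> /dev/zero`.
        with open(path, "rb") as f:
            return f.read(length) == bytes(length)

    def fingerprint(path: str = DATA) -> str:
        # Mirrors `md5sum <path>`, hashing in 1 MiB chunks.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()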
00:25:22.775 [2024-11-20 09:22:17.681387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78805 ] 00:25:22.775 [2024-11-20 09:22:17.858562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.033 [2024-11-20 09:22:17.996504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.291 [2024-11-20 09:22:18.379320] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:23.291 [2024-11-20 09:22:18.379677] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:23.550 [2024-11-20 09:22:18.545334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.550 [2024-11-20 09:22:18.545424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:23.550 [2024-11-20 09:22:18.545446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:23.550 [2024-11-20 09:22:18.545459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.549236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.550 [2024-11-20 09:22:18.549298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:23.550 [2024-11-20 09:22:18.549319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.744 ms 00:25:23.550 [2024-11-20 09:22:18.549331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.549580] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:23.550 [2024-11-20 09:22:18.550673] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:23.550 [2024-11-20 09:22:18.550716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.550 [2024-11-20 09:22:18.550731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:23.550 [2024-11-20 09:22:18.550745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:25:23.550 [2024-11-20 09:22:18.550757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.552893] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:23.550 [2024-11-20 09:22:18.571896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.550 [2024-11-20 09:22:18.572180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:23.550 [2024-11-20 09:22:18.572214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.999 ms 00:25:23.550 [2024-11-20 09:22:18.572229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.572442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.550 [2024-11-20 09:22:18.572464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:23.550 [2024-11-20 09:22:18.572479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:23.550 [2024-11-20 09:22:18.572492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.583001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:23.550 [2024-11-20 09:22:18.583425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:23.550 [2024-11-20 09:22:18.583461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.436 ms 00:25:23.550 [2024-11-20 09:22:18.583475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.583755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.550 [2024-11-20 09:22:18.583782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:23.550 [2024-11-20 09:22:18.583797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:25:23.550 [2024-11-20 09:22:18.583810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.583857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.550 [2024-11-20 09:22:18.583880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:23.550 [2024-11-20 09:22:18.583893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:23.550 [2024-11-20 09:22:18.583905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.583941] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:23.550 [2024-11-20 09:22:18.589320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.550 [2024-11-20 09:22:18.589385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:23.550 [2024-11-20 09:22:18.589405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.387 ms 00:25:23.550 [2024-11-20 09:22:18.589418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.589531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.550 [2024-11-20 09:22:18.589551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:23.550 [2024-11-20 09:22:18.589565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:23.550 [2024-11-20 09:22:18.589577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.550 [2024-11-20 09:22:18.589610] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:23.550 [2024-11-20 09:22:18.589669] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:23.550 [2024-11-20 09:22:18.589728] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:23.550 [2024-11-20 09:22:18.589752] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:23.551 [2024-11-20 09:22:18.589869] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:23.551 [2024-11-20 09:22:18.589885] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:23.551 [2024-11-20 09:22:18.589902] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:23.551 [2024-11-20 09:22:18.589918] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:23.551 [2024-11-20 09:22:18.589938] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:23.551 [2024-11-20 09:22:18.589951] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:23.551 [2024-11-20 09:22:18.589963] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:23.551 [2024-11-20 09:22:18.589974] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:23.551 [2024-11-20 09:22:18.589986] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:23.551 [2024-11-20 09:22:18.589999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.551 [2024-11-20 09:22:18.590011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:23.551 [2024-11-20 09:22:18.590024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:25:23.551 [2024-11-20 09:22:18.590035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.551 [2024-11-20 09:22:18.590140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.551 [2024-11-20 09:22:18.590174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:23.551 [2024-11-20 09:22:18.590194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:23.551 [2024-11-20 09:22:18.590207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.551 [2024-11-20 09:22:18.590330] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:23.551 [2024-11-20 09:22:18.590354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:23.551 [2024-11-20 09:22:18.590368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.551 [2024-11-20 09:22:18.590380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:23.551 [2024-11-20 09:22:18.590404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:23.551 [2024-11-20 09:22:18.590425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:23.551 [2024-11-20 09:22:18.590437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.551 [2024-11-20 09:22:18.590460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:23.551 [2024-11-20 09:22:18.590471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:23.551 [2024-11-20 09:22:18.590482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:23.551 [2024-11-20 09:22:18.590508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:23.551 [2024-11-20 09:22:18.590521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:23.551 [2024-11-20 09:22:18.590531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:23.551 [2024-11-20 09:22:18.590553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:23.551 [2024-11-20 09:22:18.590564] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:23.551 [2024-11-20 09:22:18.590585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.551 [2024-11-20 09:22:18.590606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:23.551 [2024-11-20 09:22:18.590617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.551 [2024-11-20 09:22:18.590640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:23.551 [2024-11-20 09:22:18.590666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.551 [2024-11-20 09:22:18.590690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:23.551 [2024-11-20 09:22:18.590701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:23.551 [2024-11-20 09:22:18.590722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:23.551 [2024-11-20 09:22:18.590733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.551 [2024-11-20 09:22:18.590754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:23.551 [2024-11-20 09:22:18.590765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:23.551 [2024-11-20 09:22:18.590776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:23.551 [2024-11-20 09:22:18.590786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:23.551 [2024-11-20 09:22:18.590797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:23.551 [2024-11-20 09:22:18.590807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:23.551 [2024-11-20 09:22:18.590831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:23.551 [2024-11-20 09:22:18.590842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590852] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:23.551 [2024-11-20 09:22:18.590864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:23.551 [2024-11-20 09:22:18.590877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:23.551 [2024-11-20 09:22:18.590894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:23.551 [2024-11-20 09:22:18.590906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:23.551 [2024-11-20 09:22:18.590917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:23.551 [2024-11-20 09:22:18.590928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:23.551 
[2024-11-20 09:22:18.590939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:23.551 [2024-11-20 09:22:18.590949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:23.551 [2024-11-20 09:22:18.590961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:23.551 [2024-11-20 09:22:18.590974] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:23.551 [2024-11-20 09:22:18.590989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.551 [2024-11-20 09:22:18.591002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:23.551 [2024-11-20 09:22:18.591014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:23.551 [2024-11-20 09:22:18.591025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:23.551 [2024-11-20 09:22:18.591037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:23.551 [2024-11-20 09:22:18.591048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:23.551 [2024-11-20 09:22:18.591058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:23.551 [2024-11-20 09:22:18.591070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:23.551 [2024-11-20 09:22:18.591081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:23.552 [2024-11-20 09:22:18.591093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:23.552 [2024-11-20 09:22:18.591104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:23.552 [2024-11-20 09:22:18.591115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:23.552 [2024-11-20 09:22:18.591126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:23.552 [2024-11-20 09:22:18.591137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:23.552 [2024-11-20 09:22:18.591149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:23.552 [2024-11-20 09:22:18.591161] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:23.552 [2024-11-20 09:22:18.591174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:23.552 [2024-11-20 09:22:18.591187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:23.552 [2024-11-20 09:22:18.591199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:23.552 [2024-11-20 09:22:18.591211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:23.552 [2024-11-20 09:22:18.591223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:23.552 [2024-11-20 09:22:18.591236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.552 [2024-11-20 09:22:18.591248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:23.552 [2024-11-20 09:22:18.591265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:25:23.552 [2024-11-20 09:22:18.591277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.552 [2024-11-20 09:22:18.633341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.552 [2024-11-20 09:22:18.633743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:23.552 [2024-11-20 09:22:18.633779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.981 ms 00:25:23.552 [2024-11-20 09:22:18.633793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.552 [2024-11-20 09:22:18.634026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.552 [2024-11-20 09:22:18.634056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:23.552 [2024-11-20 09:22:18.634071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:25:23.552 [2024-11-20 09:22:18.634082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.692426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.692508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:23.811 [2024-11-20 09:22:18.692532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.309 ms 00:25:23.811 [2024-11-20 09:22:18.692553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.692777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.692802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:23.811 [2024-11-20 09:22:18.692817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:23.811 [2024-11-20 09:22:18.692829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.693445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.693472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:23.811 [2024-11-20 09:22:18.693486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:25:23.811 [2024-11-20 09:22:18.693508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.693717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.693748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:23.811 [2024-11-20 09:22:18.693761] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:25:23.811 [2024-11-20 09:22:18.693773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.714662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.714739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:23.811 [2024-11-20 09:22:18.714762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.853 ms 00:25:23.811 [2024-11-20 09:22:18.714775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.733583] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:25:23.811 [2024-11-20 09:22:18.733887] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:23.811 [2024-11-20 09:22:18.733919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.733936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:23.811 [2024-11-20 09:22:18.733953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.941 ms 00:25:23.811 [2024-11-20 09:22:18.733966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.766960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.767070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:23.811 [2024-11-20 09:22:18.767094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.757 ms 00:25:23.811 [2024-11-20 09:22:18.767107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.786306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.786403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:23.811 [2024-11-20 09:22:18.786425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.986 ms 00:25:23.811 [2024-11-20 09:22:18.786439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.805795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.805873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:23.811 [2024-11-20 09:22:18.805895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.162 ms 00:25:23.811 [2024-11-20 09:22:18.805907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.807028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.807068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:23.811 [2024-11-20 09:22:18.807087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:25:23.811 [2024-11-20 09:22:18.807100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.896976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:23.811 [2024-11-20 09:22:18.897072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:23.811 [2024-11-20 09:22:18.897122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 89.833 ms 00:25:23.811 [2024-11-20 09:22:18.897136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:23.811 [2024-11-20 09:22:18.915155] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:24.070 [2024-11-20 09:22:18.939748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.070 [2024-11-20 09:22:18.939842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:24.070 [2024-11-20 09:22:18.939868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.403 ms 00:25:24.070 [2024-11-20 09:22:18.939882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.070 [2024-11-20 09:22:18.940098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.070 [2024-11-20 09:22:18.940120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:24.070 [2024-11-20 09:22:18.940136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:24.070 [2024-11-20 09:22:18.940149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.070 [2024-11-20 09:22:18.940230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.070 [2024-11-20 09:22:18.940248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:24.070 [2024-11-20 09:22:18.940261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:24.070 [2024-11-20 09:22:18.940274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.070 [2024-11-20 09:22:18.940317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.070 [2024-11-20 09:22:18.940337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:24.070 [2024-11-20 09:22:18.940350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:24.070 [2024-11-20 09:22:18.940361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.070 [2024-11-20 09:22:18.940429] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:24.070 [2024-11-20 09:22:18.940447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.070 [2024-11-20 09:22:18.940459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:24.070 [2024-11-20 09:22:18.940471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:24.070 [2024-11-20 09:22:18.940482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.070 [2024-11-20 09:22:18.976711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.070 [2024-11-20 09:22:18.976812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:24.070 [2024-11-20 09:22:18.976839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.186 ms 00:25:24.070 [2024-11-20 09:22:18.976853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.070 [2024-11-20 09:22:18.977116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.070 [2024-11-20 09:22:18.977140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:24.070 [2024-11-20 09:22:18.977154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:24.070 [2024-11-20 09:22:18.977167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
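The L2P numbers reported during this startup are internally consistent and can be cross-checked: 23,592,960 L2P entries at 4 bytes each is exactly the 90.00 MiB "l2p" region in the layout dump, and, assuming the FTL's usual 4 KiB block size, the same 90 MiB appears in the superblock dump as the type:0x2 region of 0x5a00 blocks. A short worked check:

    # Cross-check of the L2P sizing reported in this startup log.
    entries = 23_592_960        # "L2P entries" in the log
    addr_size = 4               # "L2P address size" in the log
    assert entries * addr_size == 90 * 1024 * 1024   # "Region l2p ... 90.00 MiB"

    FTL_BLOCK = 4096            # assumption: 4 KiB FTL block size
    assert 0x5A00 * FTL_BLOCK == 90 * 1024 * 1024    # "type:0x2 ... blk_sz:0x5a00"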
00:25:24.070 [2024-11-20 09:22:18.978764] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:24.070 [2024-11-20 09:22:18.984399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 433.047 ms, result 0
00:25:24.070 [2024-11-20 09:22:18.985518] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:24.070 [2024-11-20 09:22:19.003599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:24.070 [2024-11-20T09:22:19.190Z] Copying: 4096/4096 [kB] (average 25 MBps)
[2024-11-20 09:22:19.164439] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:24.070 [2024-11-20 09:22:19.178307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:24.070 [2024-11-20 09:22:19.178697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:24.070 [2024-11-20 09:22:19.178731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:25:24.070 [2024-11-20 09:22:19.178763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.070 [2024-11-20 09:22:19.178831] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:25:24.070 [2024-11-20 09:22:19.182642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:24.070 [2024-11-20 09:22:19.182705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:24.070 [2024-11-20 09:22:19.182725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.783 ms
00:25:24.070 [2024-11-20 09:22:19.182737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.070 [2024-11-20 09:22:19.184590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:24.070 [2024-11-20 09:22:19.184668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:24.070 [2024-11-20 09:22:19.184692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms
00:25:24.070 [2024-11-20 09:22:19.184704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.328 [2024-11-20 09:22:19.188748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:24.328 [2024-11-20 09:22:19.188826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:24.328 [2024-11-20 09:22:19.188847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.014 ms
00:25:24.328 [2024-11-20 09:22:19.188859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.328 [2024-11-20 09:22:19.196236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:24.328 [2024-11-20 09:22:19.196315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:25:24.328 [2024-11-20 09:22:19.196334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.326 ms
00:25:24.328 [2024-11-20 09:22:19.196348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.328 [2024-11-20 09:22:19.234186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:24.328 [2024-11-20 09:22:19.234307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:24.328 [2024-11-20 09:22:19.234331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.740 ms
00:25:24.328 [2024-11-20 09:22:19.234343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
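Each management step above logs an Action (or Rollback) record followed by name:, duration: and status: records, and finish_msg totals the whole process (433.047 ms for 'FTL startup' here). A small, hedged parser for pulling those per-step durations out of a log like this one (one record per line is assumed; step_durations is my name for the helper):

    # Sketch of a per-step timing parser for the trace_step records above.
    def step_durations(lines):
        durations, name = [], None
        for ln in lines:
            if "428:trace_step" in ln:                 # "name: <step>"
                name = ln.split("name:", 1)[1].strip()
            elif "430:trace_step" in ln and name:      # "duration: <x> ms"
                ms = float(ln.split("duration:", 1)[1].split()[0])
                durations.append((name, ms))
                name = None
        return durations  # e.g. [("Persist L2P", 4.014), ("Finish L2P trims", 7.326), ...]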
*NOTICE*: [FTL][ftl0] duration: 37.740 ms 00:25:24.328 [2024-11-20 09:22:19.234343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.328 [2024-11-20 09:22:19.256478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.328 [2024-11-20 09:22:19.256593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:24.328 [2024-11-20 09:22:19.256624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.952 ms 00:25:24.328 [2024-11-20 09:22:19.256637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.328 [2024-11-20 09:22:19.256951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.328 [2024-11-20 09:22:19.256974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:24.328 [2024-11-20 09:22:19.256989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:25:24.328 [2024-11-20 09:22:19.257001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.328 [2024-11-20 09:22:19.295900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.328 [2024-11-20 09:22:19.296235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:24.328 [2024-11-20 09:22:19.296289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.849 ms 00:25:24.328 [2024-11-20 09:22:19.296312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.328 [2024-11-20 09:22:19.335067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.328 [2024-11-20 09:22:19.335229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:24.328 [2024-11-20 09:22:19.335269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.606 ms 00:25:24.328 [2024-11-20 09:22:19.335291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.328 [2024-11-20 09:22:19.372411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.328 [2024-11-20 09:22:19.372514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:24.328 [2024-11-20 09:22:19.372537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.917 ms 00:25:24.328 [2024-11-20 09:22:19.372551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.328 [2024-11-20 09:22:19.409872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.328 [2024-11-20 09:22:19.410324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:24.328 [2024-11-20 09:22:19.410368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.079 ms 00:25:24.328 [2024-11-20 09:22:19.410383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.328 [2024-11-20 09:22:19.410525] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:24.328 [2024-11-20 09:22:19.410556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:25:24.329 [2024-11-20 09:22:19.410613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.410988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:24.329 [2024-11-20 09:22:19.411578] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:25:24.329 [2024-11-20 09:22:19.411722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:24.330 [2024-11-20 09:22:19.411903] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:24.330 [2024-11-20 09:22:19.411916] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e0f82ac-54e0-4f88-a0e4-9e19270c421c
00:25:24.330 [2024-11-20 09:22:19.411930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:24.330 [2024-11-20 09:22:19.411942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:24.330 [2024-11-20 09:22:19.411959] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:24.330 [2024-11-20 09:22:19.411976] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:24.330 [2024-11-20 09:22:19.411988] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:24.330 [2024-11-20 09:22:19.412000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:24.330 [2024-11-20 09:22:19.412012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:24.330 [2024-11-20 09:22:19.412023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:24.330 [2024-11-20 09:22:19.412033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:24.330 [2024-11-20 09:22:19.412055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:24.330 [2024-11-20 09:22:19.412089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:24.330 [2024-11-20 09:22:19.412105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms
00:25:24.330 [2024-11-20 09:22:19.412117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
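[Editor's note] The band-validity dump and statistics block above are easier to audit when condensed. A minimal shell sketch, not part of the test suite (the filename build.log is a hypothetical stand-in for a saved copy of this console output), that counts bands per reported state and pulls the write counters behind the WAF line:

  # Count FTL bands by the state printed in ftl_dev_dump_bands output.
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log |
    awk '{ states[$NF]++ } END { for (s in states) print s, states[s] }'

  # Extract the WAF inputs. The dump reads as write amplification =
  # total writes / user writes; 960 metadata writes against 0 user
  # writes divides by zero, which the log reports as "WAF: inf".
  grep -oE '(total|user) writes: [0-9]+' build.log
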
00:25:24.330 [2024-11-20 09:22:19.431613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:24.330 [2024-11-20 09:22:19.431723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:24.330 [2024-11-20 09:22:19.431746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.443 ms
00:25:24.330 [2024-11-20 09:22:19.431759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.330 [2024-11-20 09:22:19.432402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:24.330 [2024-11-20 09:22:19.432431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:24.330 [2024-11-20 09:22:19.432448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms
00:25:24.330 [2024-11-20 09:22:19.432467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.588 [2024-11-20 09:22:19.482689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:24.588 [2024-11-20 09:22:19.482781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:24.588 [2024-11-20 09:22:19.482805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:24.588 [2024-11-20 09:22:19.482817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.588 [2024-11-20 09:22:19.482996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:24.588 [2024-11-20 09:22:19.483014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:24.588 [2024-11-20 09:22:19.483028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:24.588 [2024-11-20 09:22:19.483040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.588 [2024-11-20 09:22:19.483128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:24.588 [2024-11-20 09:22:19.483147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:24.588 [2024-11-20 09:22:19.483159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:24.588 [2024-11-20 09:22:19.483171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.588 [2024-11-20 09:22:19.483198]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.588 [2024-11-20 09:22:19.483219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:24.588 [2024-11-20 09:22:19.483232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.588 [2024-11-20 09:22:19.483243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.588 [2024-11-20 09:22:19.602310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.588 [2024-11-20 09:22:19.602416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:24.589 [2024-11-20 09:22:19.602438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.589 [2024-11-20 09:22:19.602451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.589 [2024-11-20 09:22:19.701565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.589 [2024-11-20 09:22:19.701970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:24.589 [2024-11-20 09:22:19.702011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.589 [2024-11-20 09:22:19.702031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.589 [2024-11-20 09:22:19.702136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.589 [2024-11-20 09:22:19.702180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:24.589 [2024-11-20 09:22:19.702196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.589 [2024-11-20 09:22:19.702208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.589 [2024-11-20 09:22:19.702250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.589 [2024-11-20 09:22:19.702264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:24.589 [2024-11-20 09:22:19.702291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.589 [2024-11-20 09:22:19.702303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.589 [2024-11-20 09:22:19.702451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.589 [2024-11-20 09:22:19.702471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:24.589 [2024-11-20 09:22:19.702485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.589 [2024-11-20 09:22:19.702497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.589 [2024-11-20 09:22:19.702579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.589 [2024-11-20 09:22:19.702597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:24.589 [2024-11-20 09:22:19.702610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.589 [2024-11-20 09:22:19.702629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.589 [2024-11-20 09:22:19.702697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.589 [2024-11-20 09:22:19.702715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:24.589 [2024-11-20 09:22:19.702728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.589 [2024-11-20 09:22:19.702739] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:25:24.589 [2024-11-20 09:22:19.702796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:24.589 [2024-11-20 09:22:19.702812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:24.589 [2024-11-20 09:22:19.702830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:24.589 [2024-11-20 09:22:19.702841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:24.589 [2024-11-20 09:22:19.703018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.725 ms, result 0
00:25:25.964
00:25:25.964
00:25:25.964 09:22:20 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78836
00:25:25.964 09:22:20 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:25:25.964 09:22:20 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78836
00:25:25.964 09:22:20 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78836 ']'
00:25:25.964 09:22:20 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:25.964 09:22:20 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:25.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:25.964 09:22:20 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:25.964 09:22:20 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:25.964 09:22:20 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:25:25.964 [2024-11-20 09:22:21.031034] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
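[Editor's note] At this point trim.sh starts the next phase: it captures svcpid, launches spdk_tgt with FTL init-phase logging, and blocks in waitforlisten until the RPC socket accepts connections. A hedged, self-contained sketch of that launch-and-wait pattern; the binary and socket paths are the ones echoed in this log, while the polling loop is an assumption standing in for the waitforlisten helper:

  # Start the SPDK target with FTL init-phase logging enabled.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # waitforlisten equivalent: poll until the RPC UNIX domain socket exists.
  until [ -S /var/tmp/spdk.sock ]; do
      sleep 0.1
  done
  echo "spdk_tgt (pid $svcpid) is listening on /var/tmp/spdk.sock"
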
00:25:25.964 [2024-11-20 09:22:21.031192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78836 ] 00:25:26.221 [2024-11-20 09:22:21.208571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.478 [2024-11-20 09:22:21.346333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.410 09:22:22 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.410 09:22:22 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:25:27.410 09:22:22 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:25:27.667 [2024-11-20 09:22:22.657409] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:27.667 [2024-11-20 09:22:22.657529] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:27.925 [2024-11-20 09:22:22.853859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.853971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:27.925 [2024-11-20 09:22:22.854008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:27.925 [2024-11-20 09:22:22.854027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.859297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.859420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:27.925 [2024-11-20 09:22:22.859465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.219 ms 00:25:27.925 [2024-11-20 09:22:22.859491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.860067] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:27.925 [2024-11-20 09:22:22.861590] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:27.925 [2024-11-20 09:22:22.861873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.861909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:27.925 [2024-11-20 09:22:22.861941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:25:27.925 [2024-11-20 09:22:22.861963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.864993] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:27.925 [2024-11-20 09:22:22.893821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.894035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:27.925 [2024-11-20 09:22:22.894080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.839 ms 00:25:27.925 [2024-11-20 09:22:22.894114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.894493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.894540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:27.925 [2024-11-20 09:22:22.894573] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:27.925 [2024-11-20 09:22:22.894604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.906756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.906938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:27.925 [2024-11-20 09:22:22.906983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.957 ms 00:25:27.925 [2024-11-20 09:22:22.907014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.907440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.907502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:27.925 [2024-11-20 09:22:22.907535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:25:27.925 [2024-11-20 09:22:22.907572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.907724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.907770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:27.925 [2024-11-20 09:22:22.907799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:27.925 [2024-11-20 09:22:22.907833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.907930] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:25:27.925 [2024-11-20 09:22:22.915674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.916197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:27.925 [2024-11-20 09:22:22.916278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.740 ms 00:25:27.925 [2024-11-20 09:22:22.916306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.916510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.916542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:27.925 [2024-11-20 09:22:22.916580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:27.925 [2024-11-20 09:22:22.916622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.916736] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:27.925 [2024-11-20 09:22:22.916794] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:27.925 [2024-11-20 09:22:22.916896] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:27.925 [2024-11-20 09:22:22.916937] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:27.925 [2024-11-20 09:22:22.917134] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:27.925 [2024-11-20 09:22:22.917179] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:27.925 [2024-11-20 09:22:22.917244] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:27.925 [2024-11-20 09:22:22.917279] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:27.925 [2024-11-20 09:22:22.917320] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:27.925 [2024-11-20 09:22:22.917347] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:27.925 [2024-11-20 09:22:22.917381] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:27.925 [2024-11-20 09:22:22.917406] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:27.925 [2024-11-20 09:22:22.917444] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:27.925 [2024-11-20 09:22:22.917468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.917498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:27.925 [2024-11-20 09:22:22.917523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:25:27.925 [2024-11-20 09:22:22.917574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.917835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.925 [2024-11-20 09:22:22.917891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:27.925 [2024-11-20 09:22:22.917919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:27.925 [2024-11-20 09:22:22.917961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.925 [2024-11-20 09:22:22.918139] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:27.925 [2024-11-20 09:22:22.918220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:27.925 [2024-11-20 09:22:22.918246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:27.925 [2024-11-20 09:22:22.918280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.925 [2024-11-20 09:22:22.918304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:27.925 [2024-11-20 09:22:22.918333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:27.925 [2024-11-20 09:22:22.918372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:27.925 [2024-11-20 09:22:22.918412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:27.925 [2024-11-20 09:22:22.918436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:27.925 [2024-11-20 09:22:22.918468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:27.925 [2024-11-20 09:22:22.918488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:27.925 [2024-11-20 09:22:22.918517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:27.925 [2024-11-20 09:22:22.918535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:27.925 [2024-11-20 09:22:22.918567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:27.925 [2024-11-20 09:22:22.918589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:27.925 [2024-11-20 09:22:22.918617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.925 
[2024-11-20 09:22:22.918640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:27.926 [2024-11-20 09:22:22.918689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:27.926 [2024-11-20 09:22:22.918715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.926 [2024-11-20 09:22:22.918748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:27.926 [2024-11-20 09:22:22.918792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:27.926 [2024-11-20 09:22:22.918826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:27.926 [2024-11-20 09:22:22.918849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:27.926 [2024-11-20 09:22:22.918893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:27.926 [2024-11-20 09:22:22.918917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:27.926 [2024-11-20 09:22:22.918952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:27.926 [2024-11-20 09:22:22.918976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:27.926 [2024-11-20 09:22:22.919010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:27.926 [2024-11-20 09:22:22.919033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:27.926 [2024-11-20 09:22:22.919066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:27.926 [2024-11-20 09:22:22.919089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:27.926 [2024-11-20 09:22:22.919122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:27.926 [2024-11-20 09:22:22.919147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:27.926 [2024-11-20 09:22:22.919178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:27.926 [2024-11-20 09:22:22.919198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:27.926 [2024-11-20 09:22:22.919224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:27.926 [2024-11-20 09:22:22.919238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:27.926 [2024-11-20 09:22:22.919259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:27.926 [2024-11-20 09:22:22.919273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:27.926 [2024-11-20 09:22:22.919297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.926 [2024-11-20 09:22:22.919311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:27.926 [2024-11-20 09:22:22.919330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:27.926 [2024-11-20 09:22:22.919344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.926 [2024-11-20 09:22:22.919364] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:27.926 [2024-11-20 09:22:22.919385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:27.926 [2024-11-20 09:22:22.919405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:27.926 [2024-11-20 09:22:22.919420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:27.926 [2024-11-20 09:22:22.919442] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:25:27.926 [2024-11-20 09:22:22.919456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:27.926 [2024-11-20 09:22:22.919475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:27.926 [2024-11-20 09:22:22.919489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:27.926 [2024-11-20 09:22:22.919508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:27.926 [2024-11-20 09:22:22.919527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:27.926 [2024-11-20 09:22:22.919564] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:27.926 [2024-11-20 09:22:22.919586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:27.926 [2024-11-20 09:22:22.919612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:27.926 [2024-11-20 09:22:22.919627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:27.926 [2024-11-20 09:22:22.919865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:27.926 [2024-11-20 09:22:22.920114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:27.926 [2024-11-20 09:22:22.920361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:27.926 [2024-11-20 09:22:22.920562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:27.926 [2024-11-20 09:22:22.920761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:27.926 [2024-11-20 09:22:22.920924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:27.926 [2024-11-20 09:22:22.921072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:27.926 [2024-11-20 09:22:22.921310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:27.926 [2024-11-20 09:22:22.921553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:27.926 [2024-11-20 09:22:22.921744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:27.926 [2024-11-20 09:22:22.921789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:27.926 [2024-11-20 09:22:22.921815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:27.926 [2024-11-20 09:22:22.921882] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:27.926 [2024-11-20 
09:22:22.921910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:27.926 [2024-11-20 09:22:22.921962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:27.926 [2024-11-20 09:22:22.921987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:27.926 [2024-11-20 09:22:22.922010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:27.926 [2024-11-20 09:22:22.922026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:27.926 [2024-11-20 09:22:22.922051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.926 [2024-11-20 09:22:22.922067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:27.926 [2024-11-20 09:22:22.922090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.993 ms 00:25:27.926 [2024-11-20 09:22:22.922105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.926 [2024-11-20 09:22:22.985112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.926 [2024-11-20 09:22:22.985220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:27.926 [2024-11-20 09:22:22.985271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.792 ms 00:25:27.926 [2024-11-20 09:22:22.985309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:27.926 [2024-11-20 09:22:22.985677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:27.926 [2024-11-20 09:22:22.985725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:27.926 [2024-11-20 09:22:22.985764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:27.926 [2024-11-20 09:22:22.985789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.184 [2024-11-20 09:22:23.046970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.184 [2024-11-20 09:22:23.047072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:28.184 [2024-11-20 09:22:23.047100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.097 ms 00:25:28.184 [2024-11-20 09:22:23.047116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.184 [2024-11-20 09:22:23.047292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.184 [2024-11-20 09:22:23.047313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:28.184 [2024-11-20 09:22:23.047332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:28.184 [2024-11-20 09:22:23.047347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.184 [2024-11-20 09:22:23.048025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.184 [2024-11-20 09:22:23.048056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:28.184 [2024-11-20 09:22:23.048080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:25:28.184 [2024-11-20 09:22:23.048094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:28.184 [2024-11-20 09:22:23.048285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.184 [2024-11-20 09:22:23.048311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:28.184 [2024-11-20 09:22:23.048328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:25:28.184 [2024-11-20 09:22:23.048342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.184 [2024-11-20 09:22:23.072640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.184 [2024-11-20 09:22:23.072749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:28.184 [2024-11-20 09:22:23.072778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.251 ms 00:25:28.184 [2024-11-20 09:22:23.072794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.184 [2024-11-20 09:22:23.092401] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:28.184 [2024-11-20 09:22:23.092521] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:28.184 [2024-11-20 09:22:23.092556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.184 [2024-11-20 09:22:23.092573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:28.184 [2024-11-20 09:22:23.092597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.531 ms 00:25:28.184 [2024-11-20 09:22:23.092611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.184 [2024-11-20 09:22:23.125980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.185 [2024-11-20 09:22:23.126109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:28.185 [2024-11-20 09:22:23.126143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.073 ms 00:25:28.185 [2024-11-20 09:22:23.126174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.185 [2024-11-20 09:22:23.145693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.185 [2024-11-20 09:22:23.145794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:28.185 [2024-11-20 09:22:23.145828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.274 ms 00:25:28.185 [2024-11-20 09:22:23.145842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.185 [2024-11-20 09:22:23.164503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.185 [2024-11-20 09:22:23.164615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:28.185 [2024-11-20 09:22:23.164666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.439 ms 00:25:28.185 [2024-11-20 09:22:23.164685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.185 [2024-11-20 09:22:23.165812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.185 [2024-11-20 09:22:23.165855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:28.185 [2024-11-20 09:22:23.165883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:25:28.185 [2024-11-20 09:22:23.165898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.185 [2024-11-20 
09:22:23.261535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.185 [2024-11-20 09:22:23.261684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:28.185 [2024-11-20 09:22:23.261720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.564 ms 00:25:28.185 [2024-11-20 09:22:23.261737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.185 [2024-11-20 09:22:23.279613] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:28.443 [2024-11-20 09:22:23.303607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.443 [2024-11-20 09:22:23.303753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:28.443 [2024-11-20 09:22:23.303794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.607 ms 00:25:28.443 [2024-11-20 09:22:23.303820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.443 [2024-11-20 09:22:23.304065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.443 [2024-11-20 09:22:23.304091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:28.443 [2024-11-20 09:22:23.304107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:28.443 [2024-11-20 09:22:23.304123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.443 [2024-11-20 09:22:23.304218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.443 [2024-11-20 09:22:23.304245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:28.443 [2024-11-20 09:22:23.304260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:28.443 [2024-11-20 09:22:23.304276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.443 [2024-11-20 09:22:23.304319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.443 [2024-11-20 09:22:23.304338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:28.443 [2024-11-20 09:22:23.304352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:28.443 [2024-11-20 09:22:23.304368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.443 [2024-11-20 09:22:23.304424] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:28.443 [2024-11-20 09:22:23.304448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.443 [2024-11-20 09:22:23.304461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:28.443 [2024-11-20 09:22:23.304492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:28.443 [2024-11-20 09:22:23.304507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.443 [2024-11-20 09:22:23.342437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.443 [2024-11-20 09:22:23.342544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:28.443 [2024-11-20 09:22:23.342578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.847 ms 00:25:28.443 [2024-11-20 09:22:23.342595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.443 [2024-11-20 09:22:23.343133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.443 [2024-11-20 09:22:23.343167] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:28.443 [2024-11-20 09:22:23.343191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:25:28.443 [2024-11-20 09:22:23.343214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:28.443 [2024-11-20 09:22:23.344590] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:28.443 [2024-11-20 09:22:23.350225] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 490.320 ms, result 0
00:25:28.443 [2024-11-20 09:22:23.351842] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:28.443 Some configs were skipped because the RPC state that can call them passed over.
00:25:28.443 09:22:23 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:25:28.702 [2024-11-20 09:22:23.680013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:28.702 [2024-11-20 09:22:23.680551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:28.702 [2024-11-20 09:22:23.680802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms
00:25:28.702 [2024-11-20 09:22:23.680997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:28.702 [2024-11-20 09:22:23.681270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.319 ms, result 0
00:25:28.702 true
00:25:28.702 09:22:23 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:25:28.960 [2024-11-20 09:22:23.963737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:28.960 [2024-11-20 09:22:23.963852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:25:28.960 [2024-11-20 09:22:23.963886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms
00:25:28.960 [2024-11-20 09:22:23.963902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:28.960 [2024-11-20 09:22:23.963982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.303 ms, result 0
00:25:28.960 true
00:25:28.960 09:22:23 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78836
00:25:28.960 09:22:23 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78836 ']'
00:25:28.960 09:22:23 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78836
00:25:28.960 09:22:23 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:25:28.960 09:22:23 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:28.960 09:22:23 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78836
00:25:28.960 killing process with pid 78836
00:25:28.960 09:22:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:28.960 09:22:24 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:28.960 09:22:24 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78836'
00:25:28.960 09:22:24 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78836
00:25:28.960 09:22:24 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78836
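[Editor's note] The two unmaps above trim 1024 blocks at LBA 0 and at LBA 23591936, i.e. at the head and at the tail of the 23592960-entry L2P reported during startup, and killprocess then stops the target. A minimal sketch of the same sequence; the rpc.py path and flags are exactly as echoed in this log, and svcpid is the pid captured at launch (78836 in this run):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024          # trim.sh@99
  $RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # trim.sh@100
  # killprocess boils down to a kill followed by a wait, which
  # triggers the 'FTL shutdown' trace that follows below.
  kill "$svcpid" && wait "$svcpid"
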
00:25:30.336 [2024-11-20 09:22:25.275442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.336 [2024-11-20 09:22:25.275562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:30.336 [2024-11-20 09:22:25.275589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:25:30.336 [2024-11-20 09:22:25.275606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.336 [2024-11-20 09:22:25.275664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:25:30.336 [2024-11-20 09:22:25.280068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.336 [2024-11-20 09:22:25.280177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:30.336 [2024-11-20 09:22:25.280213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.347 ms
00:25:30.336 [2024-11-20 09:22:25.280228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.336 [2024-11-20 09:22:25.280720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.336 [2024-11-20 09:22:25.280753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:30.336 [2024-11-20 09:22:25.280775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms
00:25:30.336 [2024-11-20 09:22:25.280789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.336 [2024-11-20 09:22:25.285036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.336 [2024-11-20 09:22:25.285160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:30.336 [2024-11-20 09:22:25.285194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.189 ms
00:25:30.336 [2024-11-20 09:22:25.285209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.336 [2024-11-20 09:22:25.292821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.336 [2024-11-20 09:22:25.292943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:25:30.336 [2024-11-20 09:22:25.292973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.520 ms
00:25:30.336 [2024-11-20 09:22:25.292988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.336 [2024-11-20 09:22:25.308625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.336 [2024-11-20 09:22:25.308763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:30.336 [2024-11-20 09:22:25.308798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.383 ms
00:25:30.336 [2024-11-20 09:22:25.308834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.336 [2024-11-20 09:22:25.319926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.336 [2024-11-20 09:22:25.320053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:30.337 [2024-11-20 09:22:25.320092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.873 ms
00:25:30.337 [2024-11-20 09:22:25.320107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.337 [2024-11-20 09:22:25.320406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.337 [2024-11-20 09:22:25.320429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:25:30.337 [2024-11-20 09:22:25.320448] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:25:30.337 [2024-11-20 09:22:25.320462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.337 [2024-11-20 09:22:25.336472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.337 [2024-11-20 09:22:25.336595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:30.337 [2024-11-20 09:22:25.336625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.943 ms 00:25:30.337 [2024-11-20 09:22:25.336640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.337 [2024-11-20 09:22:25.352180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.337 [2024-11-20 09:22:25.352302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:30.337 [2024-11-20 09:22:25.352355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.289 ms 00:25:30.337 [2024-11-20 09:22:25.352373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.337 [2024-11-20 09:22:25.367728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.337 [2024-11-20 09:22:25.367887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:30.337 [2024-11-20 09:22:25.367934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.120 ms 00:25:30.337 [2024-11-20 09:22:25.367951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.337 [2024-11-20 09:22:25.383223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.337 [2024-11-20 09:22:25.383348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:30.337 [2024-11-20 09:22:25.383384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.995 ms 00:25:30.337 [2024-11-20 09:22:25.383400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.337 [2024-11-20 09:22:25.383578] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:30.337 [2024-11-20 09:22:25.383610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 
09:22:25.383841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.383991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:30.337 [2024-11-20 09:22:25.384263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:25:30.337 [2024-11-20 09:22:25.384280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:25:30.337 [2024-11-20 09:22:25.384958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.384974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.384999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:30.338 [2024-11-20 09:22:25.385508] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:25:30.338 [2024-11-20 09:22:25.385545] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e0f82ac-54e0-4f88-a0e4-9e19270c421c
00:25:30.338 [2024-11-20 09:22:25.385602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:30.338 [2024-11-20 09:22:25.385638] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:30.338 [2024-11-20 09:22:25.385674] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:30.338 [2024-11-20 09:22:25.385700] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:30.338 [2024-11-20 09:22:25.385715] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:30.338 [2024-11-20 09:22:25.385736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:30.338 [2024-11-20 09:22:25.385752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:30.338 [2024-11-20 09:22:25.385771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:30.338 [2024-11-20 09:22:25.385784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:30.338 [2024-11-20 09:22:25.385805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.338 [2024-11-20 09:22:25.385822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:30.338 [2024-11-20 09:22:25.385846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.236 ms
00:25:30.338 [2024-11-20 09:22:25.385861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.338 [2024-11-20 09:22:25.407242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.338 [2024-11-20 09:22:25.407348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:30.338 [2024-11-20 09:22:25.407390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.261 ms
00:25:30.338 [2024-11-20 09:22:25.407407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.338 [2024-11-20 09:22:25.408102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:30.338 [2024-11-20 09:22:25.408143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:30.338 [2024-11-20 09:22:25.408170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms
00:25:30.338 [2024-11-20 09:22:25.408191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.596 [2024-11-20 09:22:25.476733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.596 [2024-11-20 09:22:25.477169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:30.596 [2024-11-20 09:22:25.477216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.596 [2024-11-20 09:22:25.477233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.596 [2024-11-20 09:22:25.477482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.596 [2024-11-20 09:22:25.477504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:30.596 [2024-11-20 09:22:25.477524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.596 [2024-11-20 09:22:25.477544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.596 [2024-11-20 09:22:25.477702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.596 [2024-11-20 09:22:25.477726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:30.596 [2024-11-20 09:22:25.477749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.596 [2024-11-20 09:22:25.477763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.596 [2024-11-20 09:22:25.477799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.596 [2024-11-20 09:22:25.477815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:30.596 [2024-11-20 09:22:25.477833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.596 [2024-11-20 09:22:25.477847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.596 [2024-11-20 09:22:25.607876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.596 [2024-11-20 09:22:25.608328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:30.596 [2024-11-20 09:22:25.608391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.596 [2024-11-20 09:22:25.608410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.855 [2024-11-20 09:22:25.716192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.855 [2024-11-20 09:22:25.716307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:30.855 [2024-11-20 09:22:25.716350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.855 [2024-11-20 09:22:25.716371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.855 [2024-11-20 09:22:25.716576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.855 [2024-11-20 09:22:25.716597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:30.855 [2024-11-20 09:22:25.716622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.855 [2024-11-20 09:22:25.716636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.855 [2024-11-20 09:22:25.716721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.855 [2024-11-20 09:22:25.716740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:30.855 [2024-11-20 09:22:25.716759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.855 [2024-11-20 09:22:25.716774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.855 [2024-11-20 09:22:25.716936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.855 [2024-11-20 09:22:25.716957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:30.855 [2024-11-20 09:22:25.716976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.855 [2024-11-20 09:22:25.716990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.855 [2024-11-20 09:22:25.717054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.855 [2024-11-20 09:22:25.717073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:30.855 [2024-11-20 09:22:25.717092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.855 [2024-11-20 09:22:25.717106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.855 [2024-11-20 09:22:25.717173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.855 [2024-11-20 09:22:25.717194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:30.855 [2024-11-20 09:22:25.717216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.855 [2024-11-20 09:22:25.717230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.855 [2024-11-20 09:22:25.717302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:30.855 [2024-11-20 09:22:25.717321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:30.855 [2024-11-20 09:22:25.717339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:30.855 [2024-11-20 09:22:25.717353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:30.855 [2024-11-20 09:22:25.717569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 442.092 ms, result 0
00:25:31.811 09:22:26 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:32.068 [2024-11-20 09:22:26.935915] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
[2024-11-20 09:22:26.936141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78911 ]
[2024-11-20 09:22:27.118896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:32.326 [2024-11-20 09:22:27.263024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:32.582 [2024-11-20 09:22:27.652367] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 09:22:27.652993] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:32.841 [2024-11-20 09:22:27.821783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.821887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:32.841 [2024-11-20 09:22:27.821912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:25:32.841 [2024-11-20 09:22:27.821926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.826058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.826141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:32.841 [2024-11-20 09:22:27.826183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.094 ms
00:25:32.841 [2024-11-20 09:22:27.826197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.826594] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:32.841 [2024-11-20 09:22:27.827840] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:32.841 [2024-11-20 09:22:27.828071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.828093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:32.841 [2024-11-20 09:22:27.828109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms
00:25:32.841 [2024-11-20 09:22:27.828124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.830545] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:32.841 [2024-11-20 09:22:27.851013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.851352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:25:32.841 [2024-11-20 09:22:27.851389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.465 ms
00:25:32.841 [2024-11-20 09:22:27.851404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.851638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.851698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:25:32.841 [2024-11-20 09:22:27.851722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:25:32.841 [2024-11-20 09:22:27.851743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.863052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.863145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:32.841 [2024-11-20 09:22:27.863168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.228 ms
00:25:32.841 [2024-11-20 09:22:27.863181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.863430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.863455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:32.841 [2024-11-20 09:22:27.863471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms
00:25:32.841 [2024-11-20 09:22:27.863484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.863530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.863554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:25:32.841 [2024-11-20 09:22:27.863569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:25:32.841 [2024-11-20 09:22:27.863581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.863623] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:25:32.841 [2024-11-20 09:22:27.869469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.869559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:32.841 [2024-11-20 09:22:27.869581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.858 ms
00:25:32.841 [2024-11-20 09:22:27.869594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.869815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:32.841 [2024-11-20 09:22:27.869842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:25:32.841 [2024-11-20 09:22:27.869858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:25:32.841 [2024-11-20 09:22:27.869871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:32.841 [2024-11-20 09:22:27.869912] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:25:32.841 [2024-11-20 09:22:27.869955] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:25:32.841 [2024-11-20 09:22:27.870003] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:25:32.841 [2024-11-20 09:22:27.870026] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:25:32.841 [2024-11-20 09:22:27.870179] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:25:32.841 [2024-11-20 09:22:27.870200] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:25:32.841 [2024-11-20 09:22:27.870217] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:25:32.841 [2024-11-20 09:22:27.870234] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:32.841 [2024-11-20 09:22:27.870256] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:32.841 [2024-11-20 09:22:27.870269] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:25:32.841 [2024-11-20 09:22:27.870283] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:32.841 [2024-11-20 09:22:27.870295] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:32.841 [2024-11-20 09:22:27.870308] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:32.841 [2024-11-20 09:22:27.870321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.841 [2024-11-20 09:22:27.870335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:32.841 [2024-11-20 09:22:27.870348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:25:32.841 [2024-11-20 09:22:27.870361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.841 [2024-11-20 09:22:27.870471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.841 [2024-11-20 09:22:27.870489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:32.841 [2024-11-20 09:22:27.870509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:32.841 [2024-11-20 09:22:27.870522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.841 [2024-11-20 09:22:27.870667] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:32.841 [2024-11-20 09:22:27.870696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:32.841 [2024-11-20 09:22:27.870712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:32.841 [2024-11-20 09:22:27.870726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.841 [2024-11-20 09:22:27.870739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:32.841 [2024-11-20 09:22:27.870752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:32.841 [2024-11-20 09:22:27.870765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:25:32.841 [2024-11-20 09:22:27.870777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:32.841 [2024-11-20 09:22:27.870789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:25:32.841 [2024-11-20 09:22:27.870801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:32.841 [2024-11-20 09:22:27.870831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:32.841 [2024-11-20 09:22:27.870842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:25:32.841 [2024-11-20 09:22:27.870853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:32.841 [2024-11-20 09:22:27.870882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:32.841 [2024-11-20 09:22:27.870894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:25:32.841 [2024-11-20 09:22:27.870906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.841 [2024-11-20 09:22:27.870917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:25:32.841 [2024-11-20 09:22:27.870928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:25:32.841 [2024-11-20 09:22:27.870939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.841 [2024-11-20 09:22:27.870951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:32.841 [2024-11-20 09:22:27.870963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:25:32.841 [2024-11-20 09:22:27.870975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.841 [2024-11-20 09:22:27.870986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:32.841 [2024-11-20 09:22:27.870998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:25:32.841 [2024-11-20 09:22:27.871010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.841 [2024-11-20 09:22:27.871031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:32.841 [2024-11-20 09:22:27.871045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:25:32.841 [2024-11-20 09:22:27.871057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.841 [2024-11-20 09:22:27.871070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:32.841 [2024-11-20 09:22:27.871082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:25:32.842 [2024-11-20 09:22:27.871094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.842 [2024-11-20 09:22:27.871106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:32.842 [2024-11-20 09:22:27.871118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:25:32.842 [2024-11-20 09:22:27.871129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:32.842 [2024-11-20 09:22:27.871141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:32.842 [2024-11-20 09:22:27.871153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:25:32.842 [2024-11-20 09:22:27.871164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:32.842 [2024-11-20 09:22:27.871186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:32.842 [2024-11-20 09:22:27.871198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:25:32.842 [2024-11-20 09:22:27.871211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.842 [2024-11-20 09:22:27.871223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:32.842 [2024-11-20 09:22:27.871235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:25:32.842 [2024-11-20 09:22:27.871247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.842 [2024-11-20 09:22:27.871258] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:32.842 [2024-11-20 09:22:27.871272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:32.842 [2024-11-20 09:22:27.871285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:32.842 [2024-11-20 09:22:27.871304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.842 [2024-11-20 09:22:27.871318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:32.842 [2024-11-20 09:22:27.871330] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:32.842 [2024-11-20 09:22:27.871342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:32.842 [2024-11-20 09:22:27.871354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:32.842 [2024-11-20 09:22:27.871366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:32.842 [2024-11-20 09:22:27.871378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:32.842 [2024-11-20 09:22:27.871392] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:32.842 [2024-11-20 09:22:27.871408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:32.842 [2024-11-20 09:22:27.871423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:25:32.842 [2024-11-20 09:22:27.871436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:25:32.842 [2024-11-20 09:22:27.871451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:25:32.842 [2024-11-20 09:22:27.871464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:25:32.842 [2024-11-20 09:22:27.871477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:25:32.842 [2024-11-20 09:22:27.871490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:25:32.842 [2024-11-20 09:22:27.871502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:25:32.842 [2024-11-20 09:22:27.871515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:25:32.842 [2024-11-20 09:22:27.871527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:25:32.842 [2024-11-20 09:22:27.871539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:25:32.842 [2024-11-20 09:22:27.871552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:25:32.842 [2024-11-20 09:22:27.871565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:25:32.842 [2024-11-20 09:22:27.871578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:25:32.842 [2024-11-20 09:22:27.871590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:25:32.842 [2024-11-20 09:22:27.871604] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:32.842 [2024-11-20 09:22:27.871618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:32.842 [2024-11-20 09:22:27.871633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:32.842 [2024-11-20 09:22:27.871645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:32.842 [2024-11-20 09:22:27.871659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:32.842 [2024-11-20 09:22:27.871686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:32.842 [2024-11-20 09:22:27.871702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.842 [2024-11-20 09:22:27.871725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:32.842 [2024-11-20 09:22:27.871745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:25:32.842 [2024-11-20 09:22:27.871765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.842 [2024-11-20 09:22:27.916902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.842 [2024-11-20 09:22:27.917001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:32.842 [2024-11-20 09:22:27.917025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.044 ms 00:25:32.842 [2024-11-20 09:22:27.917039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.842 [2024-11-20 09:22:27.917316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.842 [2024-11-20 09:22:27.917338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:32.842 [2024-11-20 09:22:27.917355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:32.842 [2024-11-20 09:22:27.917368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.100 [2024-11-20 09:22:27.997962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.100 [2024-11-20 09:22:27.998106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.100 [2024-11-20 09:22:27.998183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.551 ms 00:25:33.100 [2024-11-20 09:22:27.998211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.100 [2024-11-20 09:22:27.998447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.100 [2024-11-20 09:22:27.998480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.100 [2024-11-20 09:22:27.998507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:33.100 [2024-11-20 09:22:27.998536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.100 [2024-11-20 09:22:27.999342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.100 [2024-11-20 09:22:27.999383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.100 [2024-11-20 09:22:27.999408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:25:33.101 [2024-11-20 09:22:27.999444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-20 09:22:27.999743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:33.101 [2024-11-20 09:22:27.999847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.101 [2024-11-20 09:22:27.999882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:25:33.101 [2024-11-20 09:22:27.999904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-20 09:22:28.030056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-20 09:22:28.030217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.101 [2024-11-20 09:22:28.030259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.078 ms 00:25:33.101 [2024-11-20 09:22:28.030283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-20 09:22:28.059153] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:33.101 [2024-11-20 09:22:28.059465] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:33.101 [2024-11-20 09:22:28.059498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-20 09:22:28.059515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:33.101 [2024-11-20 09:22:28.059537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.735 ms 00:25:33.101 [2024-11-20 09:22:28.059551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-20 09:22:28.093848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-20 09:22:28.094138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:33.101 [2024-11-20 09:22:28.094213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.976 ms 00:25:33.101 [2024-11-20 09:22:28.094241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-20 09:22:28.114235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-20 09:22:28.114334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:33.101 [2024-11-20 09:22:28.114369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.739 ms 00:25:33.101 [2024-11-20 09:22:28.114390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-20 09:22:28.134720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-20 09:22:28.134890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:33.101 [2024-11-20 09:22:28.134930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.066 ms 00:25:33.101 [2024-11-20 09:22:28.134958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.101 [2024-11-20 09:22:28.136634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.101 [2024-11-20 09:22:28.136704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:33.101 [2024-11-20 09:22:28.136734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:25:33.101 [2024-11-20 09:22:28.136758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.359 [2024-11-20 09:22:28.258097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.359 [2024-11-20 
09:22:28.258281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:33.359 [2024-11-20 09:22:28.258320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.264 ms 00:25:33.359 [2024-11-20 09:22:28.258345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.359 [2024-11-20 09:22:28.282108] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:25:33.359 [2024-11-20 09:22:28.311075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.359 [2024-11-20 09:22:28.311213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:33.359 [2024-11-20 09:22:28.311242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.304 ms 00:25:33.359 [2024-11-20 09:22:28.311279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.359 [2024-11-20 09:22:28.311570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.359 [2024-11-20 09:22:28.311595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:33.359 [2024-11-20 09:22:28.311611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:33.359 [2024-11-20 09:22:28.311625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.359 [2024-11-20 09:22:28.311743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.359 [2024-11-20 09:22:28.311765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:33.359 [2024-11-20 09:22:28.311780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:33.359 [2024-11-20 09:22:28.311800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.359 [2024-11-20 09:22:28.311843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.359 [2024-11-20 09:22:28.311860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:33.359 [2024-11-20 09:22:28.311874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:33.359 [2024-11-20 09:22:28.311886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.359 [2024-11-20 09:22:28.311942] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:33.359 [2024-11-20 09:22:28.311961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.359 [2024-11-20 09:22:28.311974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:33.359 [2024-11-20 09:22:28.311987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:33.359 [2024-11-20 09:22:28.311999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.359 [2024-11-20 09:22:28.351027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.359 [2024-11-20 09:22:28.351161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:33.359 [2024-11-20 09:22:28.351189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.987 ms 00:25:33.359 [2024-11-20 09:22:28.351203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.359 [2024-11-20 09:22:28.351582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.359 [2024-11-20 09:22:28.351608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:33.359 [2024-11-20 
09:22:28.351624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:33.359 [2024-11-20 09:22:28.351640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.359 [2024-11-20 09:22:28.353427] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:33.359 [2024-11-20 09:22:28.359803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 531.177 ms, result 0 00:25:33.359 [2024-11-20 09:22:28.361448] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:33.359 [2024-11-20 09:22:28.380778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:34.734  [2024-11-20T09:22:30.787Z] Copying: 25/256 [MB] (25 MBps) [2024-11-20T09:22:31.722Z] Copying: 46/256 [MB] (21 MBps) [2024-11-20T09:22:32.657Z] Copying: 71/256 [MB] (24 MBps) [2024-11-20T09:22:33.592Z] Copying: 95/256 [MB] (24 MBps) [2024-11-20T09:22:34.528Z] Copying: 118/256 [MB] (22 MBps) [2024-11-20T09:22:35.903Z] Copying: 140/256 [MB] (21 MBps) [2024-11-20T09:22:36.470Z] Copying: 161/256 [MB] (20 MBps) [2024-11-20T09:22:37.882Z] Copying: 184/256 [MB] (23 MBps) [2024-11-20T09:22:38.830Z] Copying: 207/256 [MB] (23 MBps) [2024-11-20T09:22:39.762Z] Copying: 229/256 [MB] (21 MBps) [2024-11-20T09:22:39.762Z] Copying: 253/256 [MB] (24 MBps) [2024-11-20T09:22:39.762Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-20 09:22:39.586555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:44.642 [2024-11-20 09:22:39.603080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.603211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:44.642 [2024-11-20 09:22:39.603240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:44.642 [2024-11-20 09:22:39.603286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.642 [2024-11-20 09:22:39.603335] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:25:44.642 [2024-11-20 09:22:39.607440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.607507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:44.642 [2024-11-20 09:22:39.607526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.074 ms 00:25:44.642 [2024-11-20 09:22:39.607540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.642 [2024-11-20 09:22:39.607920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.607942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:44.642 [2024-11-20 09:22:39.607956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:25:44.642 [2024-11-20 09:22:39.607968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.642 [2024-11-20 09:22:39.611734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.611812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:44.642 [2024-11-20 09:22:39.611829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.739 ms 00:25:44.642 [2024-11-20 
09:22:39.611842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.642 [2024-11-20 09:22:39.619351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.619518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:44.642 [2024-11-20 09:22:39.619542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.445 ms 00:25:44.642 [2024-11-20 09:22:39.619556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.642 [2024-11-20 09:22:39.658051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.658558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:44.642 [2024-11-20 09:22:39.658596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.281 ms 00:25:44.642 [2024-11-20 09:22:39.658617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.642 [2024-11-20 09:22:39.680744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.680863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:44.642 [2024-11-20 09:22:39.680909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.929 ms 00:25:44.642 [2024-11-20 09:22:39.680923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.642 [2024-11-20 09:22:39.681215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.681240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:44.642 [2024-11-20 09:22:39.681255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:44.642 [2024-11-20 09:22:39.681269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.642 [2024-11-20 09:22:39.719887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.720328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:44.642 [2024-11-20 09:22:39.720362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.563 ms 00:25:44.642 [2024-11-20 09:22:39.720377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.642 [2024-11-20 09:22:39.757888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.642 [2024-11-20 09:22:39.758361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:44.642 [2024-11-20 09:22:39.758399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.386 ms 00:25:44.642 [2024-11-20 09:22:39.758414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.901 [2024-11-20 09:22:39.797015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.901 [2024-11-20 09:22:39.797504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:44.901 [2024-11-20 09:22:39.797539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.440 ms 00:25:44.901 [2024-11-20 09:22:39.797556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.901 [2024-11-20 09:22:39.837082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.901 [2024-11-20 09:22:39.837198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:44.901 [2024-11-20 09:22:39.837224] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.317 ms 00:25:44.901 [2024-11-20 09:22:39.837238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.901 [2024-11-20 09:22:39.837405] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:44.901 [2024-11-20 09:22:39.837442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:25:44.902 [2024-11-20 09:22:39.837796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.837997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.838010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.838023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.838035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.838048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.838061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.838074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.838087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.838100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:44.902 [2024-11-20 09:22:39.838112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free
00:25:44.902 [2024-11-20 09:22:39.838124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:25:44.902-903 [2024-11-20 09:22:39.838137 - 09:22:39.838898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 50-99: 0 / 261120 wr_cnt: 0 state: free (fifty identical entries)
00:25:44.903 [2024-11-20 09:22:39.838913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:44.903 [2024-11-20 09:22:39.838941] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:44.903 [2024-11-20 09:22:39.838958] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e0f82ac-54e0-4f88-a0e4-9e19270c421c
00:25:44.903 [2024-11-20 09:22:39.838975] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:44.903 [2024-11-20 09:22:39.838987] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:44.903 [2024-11-20 09:22:39.839002] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:44.903 [2024-11-20 09:22:39.839016] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:44.903 [2024-11-20 09:22:39.839031] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:44.903 [2024-11-20 09:22:39.839046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:25:44.903 [2024-11-20 09:22:39.839068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:25:44.903 [2024-11-20 09:22:39.839079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:25:44.903 [2024-11-20 09:22:39.839090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:25:44.903 [2024-11-20 09:22:39.839104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:44.903 [2024-11-20 09:22:39.839119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:44.903 [2024-11-20 09:22:39.839135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms
00:25:44.904 [2024-11-20 09:22:39.839149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.904 [2024-11-20 09:22:39.859795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:44.904 [2024-11-20 09:22:39.859914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:44.904 [2024-11-20 09:22:39.859937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.585 ms
00:25:44.904 [2024-11-20 09:22:39.859952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.904 [2024-11-20 09:22:39.860550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:44.904 [2024-11-20 09:22:39.860576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:44.904 [2024-11-20 09:22:39.860593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms
00:25:44.904 [2024-11-20 09:22:39.860605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.904 [2024-11-20 09:22:39.913621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.904 [2024-11-20 09:22:39.913828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:44.904 [2024-11-20 09:22:39.913855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.904 [2024-11-20 09:22:39.913900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.904 [2024-11-20 09:22:39.914294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
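
The "WAF: inf" line in the dump above is the write amplification factor: total media writes divided by writes submitted by the user. The trim test ends with no user data mapped (total valid LBAs: 0, user writes: 0), so the 960 recorded writes are all FTL-internal metadata and the ratio degenerates to infinity. A minimal sketch of the same calculation, assuming only the two counters printed by ftl_dev_dump_stats (the waf helper name is illustrative, not part of the SPDK tree):

  # Sketch: WAF as reported above = total writes / user writes.
  waf() {
    local total=$1 user=$2
    if [ "$user" -eq 0 ]; then
      echo inf    # no user writes yet, so the ratio is unbounded
    else
      awk -v t="$total" -v u="$user" 'BEGIN { printf "%.3f\n", t / u }'
    fi
  }
  waf 960 0   # prints: inf
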
00:25:44.904 [2024-11-20 09:22:39.914320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.904 [2024-11-20 09:22:39.914336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.904 [2024-11-20 09:22:39.914353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.904 [2024-11-20 09:22:39.914525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.904 [2024-11-20 09:22:39.914550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.904 [2024-11-20 09:22:39.914586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.904 [2024-11-20 09:22:39.914601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.904 [2024-11-20 09:22:39.914642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.904 [2024-11-20 09:22:39.914661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:44.904 [2024-11-20 09:22:39.914719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.904 [2024-11-20 09:22:39.914734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.163 [2024-11-20 09:22:40.038713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.163 [2024-11-20 09:22:40.039426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.163 [2024-11-20 09:22:40.039471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.163 [2024-11-20 09:22:40.039488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.163 [2024-11-20 09:22:40.148848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.163 [2024-11-20 09:22:40.149007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.163 [2024-11-20 09:22:40.149037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.163 [2024-11-20 09:22:40.149056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.163 [2024-11-20 09:22:40.149311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.163 [2024-11-20 09:22:40.149335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.163 [2024-11-20 09:22:40.149351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.163 [2024-11-20 09:22:40.149365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.163 [2024-11-20 09:22:40.149416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.163 [2024-11-20 09:22:40.149449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.163 [2024-11-20 09:22:40.149463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.163 [2024-11-20 09:22:40.149475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.163 [2024-11-20 09:22:40.149616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.163 [2024-11-20 09:22:40.149638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.163 [2024-11-20 09:22:40.149698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.163 [2024-11-20 09:22:40.149715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.163 [2024-11-20 
09:22:40.149779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.163 [2024-11-20 09:22:40.149799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:45.163 [2024-11-20 09:22:40.149822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.163 [2024-11-20 09:22:40.149837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.163 [2024-11-20 09:22:40.149903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.163 [2024-11-20 09:22:40.149920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.163 [2024-11-20 09:22:40.149934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.163 [2024-11-20 09:22:40.149948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.163 [2024-11-20 09:22:40.150029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.163 [2024-11-20 09:22:40.150053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.163 [2024-11-20 09:22:40.150067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.163 [2024-11-20 09:22:40.150080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.163 [2024-11-20 09:22:40.150353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.285 ms, result 0 00:25:46.097 00:25:46.097 00:25:46.355 09:22:41 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:46.921 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:25:46.921 09:22:41 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:25:46.921 09:22:41 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:25:46.921 09:22:41 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:46.921 09:22:41 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:46.921 09:22:41 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:25:46.921 09:22:41 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:25:46.921 09:22:41 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78836 00:25:46.921 09:22:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78836 ']' 00:25:46.921 09:22:41 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78836 00:25:46.921 Process with pid 78836 is not found 00:25:46.921 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78836) - No such process 00:25:46.921 09:22:41 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78836 is not found' 00:25:46.921 ************************************ 00:25:46.921 END TEST ftl_trim 00:25:46.921 ************************************ 00:25:46.921 00:25:46.921 real 1m15.839s 00:25:46.921 user 1m45.767s 00:25:46.921 sys 0m8.851s 00:25:46.921 09:22:41 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:46.921 09:22:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:25:46.921 09:22:41 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:46.921 09:22:41 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:46.921 09:22:41 ftl -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:25:46.921 09:22:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:46.921 ************************************ 00:25:46.921 START TEST ftl_restore 00:25:46.921 ************************************ 00:25:46.921 09:22:41 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:25:46.921 * Looking for test storage... 00:25:47.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.180 09:22:42 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.180 --rc genhtml_branch_coverage=1 00:25:47.180 --rc genhtml_function_coverage=1 00:25:47.180 --rc genhtml_legend=1 00:25:47.180 --rc geninfo_all_blocks=1 00:25:47.180 --rc geninfo_unexecuted_blocks=1 00:25:47.180 00:25:47.180 ' 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.180 --rc genhtml_branch_coverage=1 00:25:47.180 --rc genhtml_function_coverage=1 00:25:47.180 --rc genhtml_legend=1 00:25:47.180 --rc geninfo_all_blocks=1 00:25:47.180 --rc geninfo_unexecuted_blocks=1 00:25:47.180 00:25:47.180 ' 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.180 --rc genhtml_branch_coverage=1 00:25:47.180 --rc genhtml_function_coverage=1 00:25:47.180 --rc genhtml_legend=1 00:25:47.180 --rc geninfo_all_blocks=1 00:25:47.180 --rc geninfo_unexecuted_blocks=1 00:25:47.180 00:25:47.180 ' 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.180 --rc genhtml_branch_coverage=1 00:25:47.180 --rc genhtml_function_coverage=1 00:25:47.180 --rc genhtml_legend=1 00:25:47.180 --rc geninfo_all_blocks=1 00:25:47.180 --rc geninfo_unexecuted_blocks=1 00:25:47.180 00:25:47.180 ' 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
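
The xtrace above is scripts/common.sh picking lcov coverage flags: cmp_versions splits both version strings on ".", "-" and ":" (the IFS=.-: reads), then walks the fields left to right comparing them as integers, so "1.15" sorts below "2" and lt 1.15 2 returns success. A condensed sketch of that pattern, assuming purely numeric fields (standalone, not the SPDK implementation, which also handles other comparison operators):

  # Sketch: field-wise numeric version compare; returns 0 when v1 < v2.
  version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      a=${v1[i]:-0} b=${v2[i]:-0}    # missing fields compare as 0
      (( a < b )) && return 0
      (( a > b )) && return 1
    done
    return 1   # equal is not less-than
  }
  version_lt 1.15 2 && echo "1.15 < 2"
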
00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.zPdYutkv77 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:47.180 
09:22:42 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79126 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.180 09:22:42 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79126 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79126 ']' 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.180 09:22:42 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:47.438 [2024-11-20 09:22:42.323201] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:25:47.439 [2024-11-20 09:22:42.323686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79126 ] 00:25:47.439 [2024-11-20 09:22:42.500968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.696 [2024-11-20 09:22:42.637558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.629 09:22:43 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:48.629 09:22:43 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:25:48.629 09:22:43 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:48.629 09:22:43 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:25:48.629 09:22:43 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:48.629 09:22:43 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:25:48.629 09:22:43 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:25:48.629 09:22:43 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:48.888 09:22:43 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:48.888 09:22:43 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:25:48.888 09:22:43 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:48.888 09:22:43 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:48.888 09:22:43 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:48.888 09:22:43 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:48.888 09:22:43 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:48.888 09:22:43 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:49.146 09:22:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:49.146 { 00:25:49.146 "name": "nvme0n1", 00:25:49.146 "aliases": [ 00:25:49.146 "32acacd4-25d8-4610-ae2b-7b7426007342" 00:25:49.146 ], 00:25:49.146 "product_name": "NVMe disk", 00:25:49.146 "block_size": 4096, 00:25:49.146 "num_blocks": 1310720, 00:25:49.146 "uuid": 
"32acacd4-25d8-4610-ae2b-7b7426007342", 00:25:49.146 "numa_id": -1, 00:25:49.146 "assigned_rate_limits": { 00:25:49.146 "rw_ios_per_sec": 0, 00:25:49.146 "rw_mbytes_per_sec": 0, 00:25:49.146 "r_mbytes_per_sec": 0, 00:25:49.146 "w_mbytes_per_sec": 0 00:25:49.146 }, 00:25:49.146 "claimed": true, 00:25:49.146 "claim_type": "read_many_write_one", 00:25:49.146 "zoned": false, 00:25:49.146 "supported_io_types": { 00:25:49.146 "read": true, 00:25:49.146 "write": true, 00:25:49.146 "unmap": true, 00:25:49.146 "flush": true, 00:25:49.146 "reset": true, 00:25:49.146 "nvme_admin": true, 00:25:49.146 "nvme_io": true, 00:25:49.146 "nvme_io_md": false, 00:25:49.146 "write_zeroes": true, 00:25:49.146 "zcopy": false, 00:25:49.146 "get_zone_info": false, 00:25:49.146 "zone_management": false, 00:25:49.146 "zone_append": false, 00:25:49.146 "compare": true, 00:25:49.146 "compare_and_write": false, 00:25:49.146 "abort": true, 00:25:49.146 "seek_hole": false, 00:25:49.146 "seek_data": false, 00:25:49.146 "copy": true, 00:25:49.146 "nvme_iov_md": false 00:25:49.146 }, 00:25:49.146 "driver_specific": { 00:25:49.146 "nvme": [ 00:25:49.146 { 00:25:49.146 "pci_address": "0000:00:11.0", 00:25:49.146 "trid": { 00:25:49.146 "trtype": "PCIe", 00:25:49.146 "traddr": "0000:00:11.0" 00:25:49.146 }, 00:25:49.146 "ctrlr_data": { 00:25:49.146 "cntlid": 0, 00:25:49.146 "vendor_id": "0x1b36", 00:25:49.146 "model_number": "QEMU NVMe Ctrl", 00:25:49.146 "serial_number": "12341", 00:25:49.146 "firmware_revision": "8.0.0", 00:25:49.146 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:49.146 "oacs": { 00:25:49.146 "security": 0, 00:25:49.146 "format": 1, 00:25:49.146 "firmware": 0, 00:25:49.146 "ns_manage": 1 00:25:49.146 }, 00:25:49.146 "multi_ctrlr": false, 00:25:49.146 "ana_reporting": false 00:25:49.146 }, 00:25:49.146 "vs": { 00:25:49.146 "nvme_version": "1.4" 00:25:49.146 }, 00:25:49.146 "ns_data": { 00:25:49.146 "id": 1, 00:25:49.146 "can_share": false 00:25:49.146 } 00:25:49.146 } 00:25:49.146 ], 00:25:49.146 "mp_policy": "active_passive" 00:25:49.146 } 00:25:49.146 } 00:25:49.146 ]' 00:25:49.146 09:22:44 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:49.146 09:22:44 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:49.146 09:22:44 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:49.146 09:22:44 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:49.146 09:22:44 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:49.146 09:22:44 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:25:49.146 09:22:44 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:25:49.146 09:22:44 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:49.146 09:22:44 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:25:49.146 09:22:44 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:49.146 09:22:44 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:49.714 09:22:44 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b982fba5-de42-497c-9388-bbc9a5221875 00:25:49.714 09:22:44 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:25:49.714 09:22:44 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b982fba5-de42-497c-9388-bbc9a5221875 00:25:49.973 09:22:44 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:25:50.260 09:22:45 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=1ac9cc12-e0de-4c39-a0a4-9099a9b58fb7 00:25:50.260 09:22:45 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1ac9cc12-e0de-4c39-a0a4-9099a9b58fb7 00:25:50.829 09:22:45 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:50.829 09:22:45 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:50.829 09:22:45 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:50.829 09:22:45 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:25:50.829 09:22:45 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:50.829 09:22:45 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:50.829 09:22:45 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:25:50.829 09:22:45 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:50.829 09:22:45 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:50.829 09:22:45 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:50.829 09:22:45 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:50.829 09:22:45 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:50.829 09:22:45 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:51.088 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:51.088 { 00:25:51.088 "name": "36adcc77-b3c4-45f0-aaeb-db5b518615fc", 00:25:51.088 "aliases": [ 00:25:51.088 "lvs/nvme0n1p0" 00:25:51.088 ], 00:25:51.088 "product_name": "Logical Volume", 00:25:51.088 "block_size": 4096, 00:25:51.088 "num_blocks": 26476544, 00:25:51.088 "uuid": "36adcc77-b3c4-45f0-aaeb-db5b518615fc", 00:25:51.088 "assigned_rate_limits": { 00:25:51.088 "rw_ios_per_sec": 0, 00:25:51.088 "rw_mbytes_per_sec": 0, 00:25:51.088 "r_mbytes_per_sec": 0, 00:25:51.089 "w_mbytes_per_sec": 0 00:25:51.089 }, 00:25:51.089 "claimed": false, 00:25:51.089 "zoned": false, 00:25:51.089 "supported_io_types": { 00:25:51.089 "read": true, 00:25:51.089 "write": true, 00:25:51.089 "unmap": true, 00:25:51.089 "flush": false, 00:25:51.089 "reset": true, 00:25:51.089 "nvme_admin": false, 00:25:51.089 "nvme_io": false, 00:25:51.089 "nvme_io_md": false, 00:25:51.089 "write_zeroes": true, 00:25:51.089 "zcopy": false, 00:25:51.089 "get_zone_info": false, 00:25:51.089 "zone_management": false, 00:25:51.089 "zone_append": false, 00:25:51.089 "compare": false, 00:25:51.089 "compare_and_write": false, 00:25:51.089 "abort": false, 00:25:51.089 "seek_hole": true, 00:25:51.089 "seek_data": true, 00:25:51.089 "copy": false, 00:25:51.089 "nvme_iov_md": false 00:25:51.089 }, 00:25:51.089 "driver_specific": { 00:25:51.089 "lvol": { 00:25:51.089 "lvol_store_uuid": "1ac9cc12-e0de-4c39-a0a4-9099a9b58fb7", 00:25:51.089 "base_bdev": "nvme0n1", 00:25:51.089 "thin_provision": true, 00:25:51.089 "num_allocated_clusters": 0, 00:25:51.089 "snapshot": false, 00:25:51.089 "clone": false, 00:25:51.089 "esnap_clone": false 00:25:51.089 } 00:25:51.089 } 00:25:51.089 } 00:25:51.089 ]' 00:25:51.089 09:22:46 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:51.089 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:51.089 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:51.348 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:51.348 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:51.348 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:51.348 09:22:46 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:25:51.348 09:22:46 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:25:51.348 09:22:46 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:51.606 09:22:46 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:51.606 09:22:46 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:51.606 09:22:46 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:51.606 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:51.606 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:51.606 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:51.606 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:51.606 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:51.864 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:51.864 { 00:25:51.864 "name": "36adcc77-b3c4-45f0-aaeb-db5b518615fc", 00:25:51.864 "aliases": [ 00:25:51.864 "lvs/nvme0n1p0" 00:25:51.864 ], 00:25:51.864 "product_name": "Logical Volume", 00:25:51.864 "block_size": 4096, 00:25:51.864 "num_blocks": 26476544, 00:25:51.864 "uuid": "36adcc77-b3c4-45f0-aaeb-db5b518615fc", 00:25:51.864 "assigned_rate_limits": { 00:25:51.864 "rw_ios_per_sec": 0, 00:25:51.864 "rw_mbytes_per_sec": 0, 00:25:51.864 "r_mbytes_per_sec": 0, 00:25:51.864 "w_mbytes_per_sec": 0 00:25:51.864 }, 00:25:51.864 "claimed": false, 00:25:51.864 "zoned": false, 00:25:51.864 "supported_io_types": { 00:25:51.864 "read": true, 00:25:51.864 "write": true, 00:25:51.864 "unmap": true, 00:25:51.864 "flush": false, 00:25:51.864 "reset": true, 00:25:51.864 "nvme_admin": false, 00:25:51.864 "nvme_io": false, 00:25:51.864 "nvme_io_md": false, 00:25:51.864 "write_zeroes": true, 00:25:51.864 "zcopy": false, 00:25:51.864 "get_zone_info": false, 00:25:51.864 "zone_management": false, 00:25:51.864 "zone_append": false, 00:25:51.864 "compare": false, 00:25:51.864 "compare_and_write": false, 00:25:51.864 "abort": false, 00:25:51.864 "seek_hole": true, 00:25:51.864 "seek_data": true, 00:25:51.864 "copy": false, 00:25:51.864 "nvme_iov_md": false 00:25:51.864 }, 00:25:51.864 "driver_specific": { 00:25:51.864 "lvol": { 00:25:51.864 "lvol_store_uuid": "1ac9cc12-e0de-4c39-a0a4-9099a9b58fb7", 00:25:51.864 "base_bdev": "nvme0n1", 00:25:51.864 "thin_provision": true, 00:25:51.864 "num_allocated_clusters": 0, 00:25:51.864 "snapshot": false, 00:25:51.864 "clone": false, 00:25:51.864 "esnap_clone": false 00:25:51.864 } 00:25:51.864 } 00:25:51.864 } 00:25:51.864 ]' 00:25:51.864 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
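
Every get_bdev_size pass in this stretch follows the same recipe: dump the bdev as JSON with bdev_get_bdevs, pull block_size and num_blocks out with jq, and multiply into MiB. That is where the sizes in the trace come from: 1310720 blocks x 4096 B = 5120 MiB for nvme0n1, and 26476544 blocks x 4096 B = 103424 MiB for the thin-provisioned lvol. A standalone sketch of that helper against the same running target (the function name is illustrative; the real helper lives in autotest_common.sh):

  # Sketch: bdev size in MiB from the two JSON fields queried above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  get_bdev_size_mib() {
    local info bs nb
    info=$("$rpc" bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<< "$info")
    nb=$(jq '.[] .num_blocks' <<< "$info")
    echo $(( nb * bs / 1024 / 1024 ))
  }
  get_bdev_size_mib nvme0n1   # -> 5120
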
00:25:51.864 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:51.864 09:22:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:52.122 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:52.122 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:52.122 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:52.122 09:22:47 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:25:52.122 09:22:47 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:52.380 09:22:47 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:52.381 09:22:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:52.381 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:52.381 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:52.381 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:25:52.381 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:25:52.381 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 36adcc77-b3c4-45f0-aaeb-db5b518615fc 00:25:52.640 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:52.640 { 00:25:52.640 "name": "36adcc77-b3c4-45f0-aaeb-db5b518615fc", 00:25:52.640 "aliases": [ 00:25:52.640 "lvs/nvme0n1p0" 00:25:52.640 ], 00:25:52.640 "product_name": "Logical Volume", 00:25:52.640 "block_size": 4096, 00:25:52.640 "num_blocks": 26476544, 00:25:52.640 "uuid": "36adcc77-b3c4-45f0-aaeb-db5b518615fc", 00:25:52.640 "assigned_rate_limits": { 00:25:52.640 "rw_ios_per_sec": 0, 00:25:52.640 "rw_mbytes_per_sec": 0, 00:25:52.640 "r_mbytes_per_sec": 0, 00:25:52.640 "w_mbytes_per_sec": 0 00:25:52.640 }, 00:25:52.640 "claimed": false, 00:25:52.640 "zoned": false, 00:25:52.640 "supported_io_types": { 00:25:52.640 "read": true, 00:25:52.640 "write": true, 00:25:52.640 "unmap": true, 00:25:52.640 "flush": false, 00:25:52.640 "reset": true, 00:25:52.640 "nvme_admin": false, 00:25:52.640 "nvme_io": false, 00:25:52.640 "nvme_io_md": false, 00:25:52.640 "write_zeroes": true, 00:25:52.640 "zcopy": false, 00:25:52.640 "get_zone_info": false, 00:25:52.640 "zone_management": false, 00:25:52.640 "zone_append": false, 00:25:52.640 "compare": false, 00:25:52.640 "compare_and_write": false, 00:25:52.640 "abort": false, 00:25:52.640 "seek_hole": true, 00:25:52.640 "seek_data": true, 00:25:52.640 "copy": false, 00:25:52.640 "nvme_iov_md": false 00:25:52.640 }, 00:25:52.640 "driver_specific": { 00:25:52.640 "lvol": { 00:25:52.640 "lvol_store_uuid": "1ac9cc12-e0de-4c39-a0a4-9099a9b58fb7", 00:25:52.640 "base_bdev": "nvme0n1", 00:25:52.640 "thin_provision": true, 00:25:52.640 "num_allocated_clusters": 0, 00:25:52.640 "snapshot": false, 00:25:52.640 "clone": false, 00:25:52.640 "esnap_clone": false 00:25:52.640 } 00:25:52.640 } 00:25:52.640 } 00:25:52.640 ]' 00:25:52.640 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:52.640 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:25:52.640 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:52.898 09:22:47 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:25:52.898 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:52.898 09:22:47 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:25:52.898 09:22:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:52.898 09:22:47 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 36adcc77-b3c4-45f0-aaeb-db5b518615fc --l2p_dram_limit 10' 00:25:52.898 09:22:47 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:52.898 09:22:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:52.898 09:22:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:52.898 09:22:47 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:25:52.898 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:25:52.898 09:22:47 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 36adcc77-b3c4-45f0-aaeb-db5b518615fc --l2p_dram_limit 10 -c nvc0n1p0 00:25:53.157 [2024-11-20 09:22:48.036877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 09:22:48.036967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:53.157 [2024-11-20 09:22:48.036996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:53.157 [2024-11-20 09:22:48.037011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.157 [2024-11-20 09:22:48.037116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 09:22:48.037136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.157 [2024-11-20 09:22:48.037153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:53.157 [2024-11-20 09:22:48.037166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.157 [2024-11-20 09:22:48.037210] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:53.157 [2024-11-20 09:22:48.038441] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:53.157 [2024-11-20 09:22:48.038489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 09:22:48.038505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.157 [2024-11-20 09:22:48.038522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.289 ms 00:25:53.157 [2024-11-20 09:22:48.038534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.157 [2024-11-20 09:22:48.038710] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 26796cd3-a23c-498c-9a73-5c5d333b72c6 00:25:53.157 [2024-11-20 09:22:48.040667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 09:22:48.040714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:53.157 [2024-11-20 09:22:48.040732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:53.157 [2024-11-20 09:22:48.040748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.157 [2024-11-20 09:22:48.051670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 
09:22:48.051759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.157 [2024-11-20 09:22:48.051785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.799 ms 00:25:53.157 [2024-11-20 09:22:48.051802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.157 [2024-11-20 09:22:48.051998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 09:22:48.052027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.157 [2024-11-20 09:22:48.052044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:25:53.157 [2024-11-20 09:22:48.052065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.157 [2024-11-20 09:22:48.052194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 09:22:48.052220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:53.157 [2024-11-20 09:22:48.052235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:53.157 [2024-11-20 09:22:48.052254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.157 [2024-11-20 09:22:48.052295] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:53.157 [2024-11-20 09:22:48.058019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 09:22:48.058092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.157 [2024-11-20 09:22:48.058118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.731 ms 00:25:53.157 [2024-11-20 09:22:48.058132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.157 [2024-11-20 09:22:48.058223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 09:22:48.058241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:53.157 [2024-11-20 09:22:48.058259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:53.157 [2024-11-20 09:22:48.058272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.157 [2024-11-20 09:22:48.058347] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:53.157 [2024-11-20 09:22:48.058519] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:53.157 [2024-11-20 09:22:48.058546] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:53.157 [2024-11-20 09:22:48.058574] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:53.157 [2024-11-20 09:22:48.058593] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:53.157 [2024-11-20 09:22:48.058608] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:53.157 [2024-11-20 09:22:48.058625] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:53.157 [2024-11-20 09:22:48.058637] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:53.157 [2024-11-20 09:22:48.058683] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:53.157 [2024-11-20 09:22:48.058698] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:53.157 [2024-11-20 09:22:48.058714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.157 [2024-11-20 09:22:48.058727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:53.157 [2024-11-20 09:22:48.058756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:25:53.158 [2024-11-20 09:22:48.058784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.158 [2024-11-20 09:22:48.058890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.158 [2024-11-20 09:22:48.058907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:53.158 [2024-11-20 09:22:48.058923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:53.158 [2024-11-20 09:22:48.058935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.158 [2024-11-20 09:22:48.059063] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:53.158 [2024-11-20 09:22:48.059083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:53.158 [2024-11-20 09:22:48.059100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:53.158 [2024-11-20 09:22:48.059123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:53.158 [2024-11-20 09:22:48.059151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:53.158 [2024-11-20 09:22:48.059178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:53.158 [2024-11-20 09:22:48.059193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:53.158 [2024-11-20 09:22:48.059219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:53.158 [2024-11-20 09:22:48.059231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:53.158 [2024-11-20 09:22:48.059245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:53.158 [2024-11-20 09:22:48.059256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:53.158 [2024-11-20 09:22:48.059271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:53.158 [2024-11-20 09:22:48.059283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:53.158 [2024-11-20 09:22:48.059311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:53.158 [2024-11-20 09:22:48.059327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:53.158 [2024-11-20 09:22:48.059356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:53.158 [2024-11-20 09:22:48.059382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:53.158 
[2024-11-20 09:22:48.059394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:53.158 [2024-11-20 09:22:48.059420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:53.158 [2024-11-20 09:22:48.059434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:53.158 [2024-11-20 09:22:48.059459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:53.158 [2024-11-20 09:22:48.059472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:53.158 [2024-11-20 09:22:48.059497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:53.158 [2024-11-20 09:22:48.059513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:53.158 [2024-11-20 09:22:48.059539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:53.158 [2024-11-20 09:22:48.059551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:53.158 [2024-11-20 09:22:48.059565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:53.158 [2024-11-20 09:22:48.059577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:53.158 [2024-11-20 09:22:48.059590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:53.158 [2024-11-20 09:22:48.059602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:53.158 [2024-11-20 09:22:48.059629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:53.158 [2024-11-20 09:22:48.059643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059671] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:53.158 [2024-11-20 09:22:48.059687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:53.158 [2024-11-20 09:22:48.059700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:53.158 [2024-11-20 09:22:48.059718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:53.158 [2024-11-20 09:22:48.059731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:53.158 [2024-11-20 09:22:48.059749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:53.158 [2024-11-20 09:22:48.059761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:53.158 [2024-11-20 09:22:48.059776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:53.158 [2024-11-20 09:22:48.059787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:53.158 [2024-11-20 09:22:48.059803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:53.158 [2024-11-20 09:22:48.059820] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:53.158 [2024-11-20 
09:22:48.059838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:53.158 [2024-11-20 09:22:48.059855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:53.158 [2024-11-20 09:22:48.059870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:53.158 [2024-11-20 09:22:48.059882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:53.158 [2024-11-20 09:22:48.059897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:53.158 [2024-11-20 09:22:48.059909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:53.158 [2024-11-20 09:22:48.059924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:53.158 [2024-11-20 09:22:48.059936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:53.158 [2024-11-20 09:22:48.059950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:53.158 [2024-11-20 09:22:48.059963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:53.158 [2024-11-20 09:22:48.059979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:53.158 [2024-11-20 09:22:48.059992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:53.158 [2024-11-20 09:22:48.060006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:53.158 [2024-11-20 09:22:48.060018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:53.158 [2024-11-20 09:22:48.060035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:53.158 [2024-11-20 09:22:48.060047] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:53.158 [2024-11-20 09:22:48.060064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:53.158 [2024-11-20 09:22:48.060077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:53.158 [2024-11-20 09:22:48.060093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:53.158 [2024-11-20 09:22:48.060105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:53.158 [2024-11-20 09:22:48.060120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:53.158 [2024-11-20 09:22:48.060133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.158 [2024-11-20 09:22:48.060149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:53.158 [2024-11-20 09:22:48.060162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:25:53.158 [2024-11-20 09:22:48.060176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.158 [2024-11-20 09:22:48.060238] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:53.158 [2024-11-20 09:22:48.060270] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:55.735 [2024-11-20 09:22:50.636091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.735 [2024-11-20 09:22:50.636192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:55.735 [2024-11-20 09:22:50.636217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2575.863 ms 00:25:55.735 [2024-11-20 09:22:50.636234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.735 [2024-11-20 09:22:50.676711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.735 [2024-11-20 09:22:50.676822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:55.735 [2024-11-20 09:22:50.676849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.143 ms 00:25:55.735 [2024-11-20 09:22:50.676867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.735 [2024-11-20 09:22:50.677104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.735 [2024-11-20 09:22:50.677132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:55.735 [2024-11-20 09:22:50.677148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:55.735 [2024-11-20 09:22:50.677167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.735 [2024-11-20 09:22:50.722954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.735 [2024-11-20 09:22:50.723037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:55.735 [2024-11-20 09:22:50.723059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.721 ms 00:25:55.735 [2024-11-20 09:22:50.723077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.735 [2024-11-20 09:22:50.723145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.735 [2024-11-20 09:22:50.723173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:55.735 [2024-11-20 09:22:50.723197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:55.735 [2024-11-20 09:22:50.723213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.735 [2024-11-20 09:22:50.723909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.735 [2024-11-20 09:22:50.723936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:55.735 [2024-11-20 09:22:50.723951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:25:55.735 [2024-11-20 09:22:50.723967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.735 
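The trace_step notices above follow a fixed four-record pattern per FTL management step (Action, name, duration, status, emitted from mngt/ftl_mngt.c lines 427-431), so per-step timings can be pulled out of a captured console log with a short awk filter. A minimal sketch, assuming each record sits on its own line as elsewhere in this console output, and that the log was saved to a file (autorun.log is a hypothetical name here, not something this job is known to write):

    awk '
      # remember the step name from the 428:trace_step record
      /trace_step/ && /name:/ { name = $0; sub(/.*name: /, "", name) }
      # on the matching 430:trace_step record, print "<duration> ms  <name>"
      /trace_step/ && /duration:/ {
          dur = $0
          sub(/.*duration: /, "", dur); sub(/ ms.*/, "", dur)
          printf "%10.3f ms  %s\n", dur, name
      }
    ' autorun.log | sort -rn

Run against the startup sequence above, this would rank "Scrub NV cache" (2575.863 ms) as the dominant contributor to the 3089.078 ms reported for the whole 'FTL startup' management process.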
[2024-11-20 09:22:50.724133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.735 [2024-11-20 09:22:50.724153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:55.735 [2024-11-20 09:22:50.724169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:25:55.735 [2024-11-20 09:22:50.724188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.735 [2024-11-20 09:22:50.745591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.735 [2024-11-20 09:22:50.745696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:55.735 [2024-11-20 09:22:50.745720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.372 ms 00:25:55.735 [2024-11-20 09:22:50.745736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.735 [2024-11-20 09:22:50.763049] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:55.735 [2024-11-20 09:22:50.767826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.735 [2024-11-20 09:22:50.767885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:55.735 [2024-11-20 09:22:50.767912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.900 ms 00:25:55.735 [2024-11-20 09:22:50.767927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.994 [2024-11-20 09:22:50.859664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.994 [2024-11-20 09:22:50.859759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:55.994 [2024-11-20 09:22:50.859788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.638 ms 00:25:55.994 [2024-11-20 09:22:50.859804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.994 [2024-11-20 09:22:50.860111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.994 [2024-11-20 09:22:50.860137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:55.994 [2024-11-20 09:22:50.860159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:25:55.994 [2024-11-20 09:22:50.860172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.994 [2024-11-20 09:22:50.894504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.994 [2024-11-20 09:22:50.894589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:55.994 [2024-11-20 09:22:50.894618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.219 ms 00:25:55.994 [2024-11-20 09:22:50.894633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.994 [2024-11-20 09:22:50.927947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.994 [2024-11-20 09:22:50.928026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:55.994 [2024-11-20 09:22:50.928053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.167 ms 00:25:55.994 [2024-11-20 09:22:50.928067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.994 [2024-11-20 09:22:50.929005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.994 [2024-11-20 09:22:50.929190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:55.994 
[2024-11-20 09:22:50.929226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:25:55.994 [2024-11-20 09:22:50.929242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.994 [2024-11-20 09:22:51.018741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.994 [2024-11-20 09:22:51.019082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:55.994 [2024-11-20 09:22:51.019128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.372 ms 00:25:55.994 [2024-11-20 09:22:51.019144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.994 [2024-11-20 09:22:51.053885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.994 [2024-11-20 09:22:51.054254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:55.994 [2024-11-20 09:22:51.054298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.533 ms 00:25:55.994 [2024-11-20 09:22:51.054316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.994 [2024-11-20 09:22:51.089337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.994 [2024-11-20 09:22:51.089815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:55.994 [2024-11-20 09:22:51.089864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.877 ms 00:25:55.994 [2024-11-20 09:22:51.089881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.253 [2024-11-20 09:22:51.124525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.253 [2024-11-20 09:22:51.124615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:56.253 [2024-11-20 09:22:51.124642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.489 ms 00:25:56.253 [2024-11-20 09:22:51.124674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.253 [2024-11-20 09:22:51.124800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.253 [2024-11-20 09:22:51.124819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:56.253 [2024-11-20 09:22:51.124841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:56.253 [2024-11-20 09:22:51.124855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.253 [2024-11-20 09:22:51.125037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.253 [2024-11-20 09:22:51.125057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:56.253 [2024-11-20 09:22:51.125078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:56.253 [2024-11-20 09:22:51.125091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.253 [2024-11-20 09:22:51.126502] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3089.078 ms, result 0 00:25:56.253 { 00:25:56.253 "name": "ftl0", 00:25:56.253 "uuid": "26796cd3-a23c-498c-9a73-5c5d333b72c6" 00:25:56.253 } 00:25:56.253 09:22:51 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:56.253 09:22:51 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:56.510 09:22:51 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:25:56.510 09:22:51 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:56.770 [2024-11-20 09:22:51.805991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.770 [2024-11-20 09:22:51.806100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:56.770 [2024-11-20 09:22:51.806126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:56.770 [2024-11-20 09:22:51.806156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.770 [2024-11-20 09:22:51.806222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:56.770 [2024-11-20 09:22:51.810046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.770 [2024-11-20 09:22:51.810090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:56.770 [2024-11-20 09:22:51.810111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.790 ms 00:25:56.770 [2024-11-20 09:22:51.810125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.770 [2024-11-20 09:22:51.810515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.770 [2024-11-20 09:22:51.810543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:56.770 [2024-11-20 09:22:51.810566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:25:56.770 [2024-11-20 09:22:51.810579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.770 [2024-11-20 09:22:51.813801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.770 [2024-11-20 09:22:51.813837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:56.770 [2024-11-20 09:22:51.813857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.181 ms 00:25:56.770 [2024-11-20 09:22:51.813870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.770 [2024-11-20 09:22:51.820381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.770 [2024-11-20 09:22:51.820608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:56.770 [2024-11-20 09:22:51.820668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.471 ms 00:25:56.770 [2024-11-20 09:22:51.820686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.770 [2024-11-20 09:22:51.854782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.770 [2024-11-20 09:22:51.855111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:56.770 [2024-11-20 09:22:51.855155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.934 ms 00:25:56.770 [2024-11-20 09:22:51.855171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.770 [2024-11-20 09:22:51.875891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.770 [2024-11-20 09:22:51.875985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:56.770 [2024-11-20 09:22:51.876013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.595 ms 00:25:56.770 [2024-11-20 09:22:51.876027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.770 [2024-11-20 09:22:51.876356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.770 [2024-11-20 09:22:51.876380] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:56.770 [2024-11-20 09:22:51.876398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:25:56.770 [2024-11-20 09:22:51.876412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.030 [2024-11-20 09:22:51.910755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.030 [2024-11-20 09:22:51.910853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:57.030 [2024-11-20 09:22:51.910882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.291 ms 00:25:57.030 [2024-11-20 09:22:51.910896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.030 [2024-11-20 09:22:51.945788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.030 [2024-11-20 09:22:51.946199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:57.030 [2024-11-20 09:22:51.946243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.755 ms 00:25:57.030 [2024-11-20 09:22:51.946258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.030 [2024-11-20 09:22:51.980875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.030 [2024-11-20 09:22:51.981350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:57.030 [2024-11-20 09:22:51.981399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.485 ms 00:25:57.030 [2024-11-20 09:22:51.981415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.030 [2024-11-20 09:22:52.017711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.030 [2024-11-20 09:22:52.017850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:57.030 [2024-11-20 09:22:52.017887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.945 ms 00:25:57.030 [2024-11-20 09:22:52.017904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.030 [2024-11-20 09:22:52.018130] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:57.030 [2024-11-20 09:22:52.018203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018414] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 
[2024-11-20 09:22:52.018894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.018993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:25:57.030 [2024-11-20 09:22:52.019312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:57.030 [2024-11-20 09:22:52.019513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:57.031 [2024-11-20 09:22:52.019961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:57.031 [2024-11-20 09:22:52.019983] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 26796cd3-a23c-498c-9a73-5c5d333b72c6 00:25:57.031 [2024-11-20 09:22:52.020010] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:57.031 [2024-11-20 09:22:52.020035] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:57.031 [2024-11-20 09:22:52.020049] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:57.031 [2024-11-20 09:22:52.020069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:57.031 [2024-11-20 09:22:52.020081] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:57.031 [2024-11-20 09:22:52.020097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:57.031 [2024-11-20 09:22:52.020109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:57.031 [2024-11-20 09:22:52.020123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:57.031 [2024-11-20 09:22:52.020134] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:25:57.031 [2024-11-20 09:22:52.020152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.031 [2024-11-20 09:22:52.020166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:57.031 [2024-11-20 09:22:52.020185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:25:57.031 [2024-11-20 09:22:52.020198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.031 [2024-11-20 09:22:52.040258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.031 [2024-11-20 09:22:52.040603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:57.031 [2024-11-20 09:22:52.040643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.890 ms 00:25:57.031 [2024-11-20 09:22:52.040675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.031 [2024-11-20 09:22:52.041203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.031 [2024-11-20 09:22:52.041230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:57.031 [2024-11-20 09:22:52.041250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:25:57.031 [2024-11-20 09:22:52.041266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.031 [2024-11-20 09:22:52.100501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.031 [2024-11-20 09:22:52.100628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.031 [2024-11-20 09:22:52.100687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.031 [2024-11-20 09:22:52.100705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.031 [2024-11-20 09:22:52.100870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.031 [2024-11-20 09:22:52.100888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.031 [2024-11-20 09:22:52.100905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.031 [2024-11-20 09:22:52.100922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.031 [2024-11-20 09:22:52.101149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.031 [2024-11-20 09:22:52.101171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.031 [2024-11-20 09:22:52.101188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.031 [2024-11-20 09:22:52.101200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.031 [2024-11-20 09:22:52.101239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.031 [2024-11-20 09:22:52.101254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.031 [2024-11-20 09:22:52.101271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.031 [2024-11-20 09:22:52.101282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.290 [2024-11-20 09:22:52.224478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.290 [2024-11-20 09:22:52.224643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.290 [2024-11-20 09:22:52.224711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:57.290 [2024-11-20 09:22:52.224733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.290 [2024-11-20 09:22:52.345925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.290 [2024-11-20 09:22:52.346187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.290 [2024-11-20 09:22:52.346256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.290 [2024-11-20 09:22:52.346302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.290 [2024-11-20 09:22:52.346743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.290 [2024-11-20 09:22:52.346777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.290 [2024-11-20 09:22:52.346800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.290 [2024-11-20 09:22:52.346815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.290 [2024-11-20 09:22:52.346944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.290 [2024-11-20 09:22:52.346967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.290 [2024-11-20 09:22:52.346990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.290 [2024-11-20 09:22:52.347007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.290 [2024-11-20 09:22:52.347239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.290 [2024-11-20 09:22:52.347272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.290 [2024-11-20 09:22:52.347297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.290 [2024-11-20 09:22:52.347314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.290 [2024-11-20 09:22:52.347397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.290 [2024-11-20 09:22:52.347417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:57.290 [2024-11-20 09:22:52.347435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.290 [2024-11-20 09:22:52.347449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.290 [2024-11-20 09:22:52.347540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.290 [2024-11-20 09:22:52.347562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.290 [2024-11-20 09:22:52.347578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.290 [2024-11-20 09:22:52.347591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.290 [2024-11-20 09:22:52.347711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.290 [2024-11-20 09:22:52.347731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.290 [2024-11-20 09:22:52.347749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.290 [2024-11-20 09:22:52.347763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.290 [2024-11-20 09:22:52.348030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 541.953 ms, result 0 00:25:57.290 true 00:25:57.290 09:22:52 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79126 
00:25:57.290 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79126 ']' 00:25:57.290 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79126 00:25:57.290 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:25:57.290 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:57.290 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79126 00:25:57.548 killing process with pid 79126 00:25:57.548 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.548 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:57.548 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79126' 00:25:57.548 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79126 00:25:57.548 09:22:52 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79126 00:26:00.078 09:22:54 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:05.375 262144+0 records in 00:26:05.375 262144+0 records out 00:26:05.375 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.04233 s, 213 MB/s 00:26:05.375 09:22:59 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:07.277 09:23:02 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:07.534 [2024-11-20 09:23:02.471826] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:26:07.534 [2024-11-20 09:23:02.472269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79380 ] 00:26:07.792 [2024-11-20 09:23:02.661009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.792 [2024-11-20 09:23:02.812759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.359 [2024-11-20 09:23:03.203612] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:08.359 [2024-11-20 09:23:03.203735] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:08.359 [2024-11-20 09:23:03.382310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.359 [2024-11-20 09:23:03.382865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:08.359 [2024-11-20 09:23:03.382921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:08.359 [2024-11-20 09:23:03.382940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.359 [2024-11-20 09:23:03.383087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.359 [2024-11-20 09:23:03.383113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:08.359 [2024-11-20 09:23:03.383133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:26:08.359 [2024-11-20 09:23:03.383147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.359 [2024-11-20 09:23:03.383185] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:26:08.359 [2024-11-20 09:23:03.384357] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:08.359 [2024-11-20 09:23:03.384400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.359 [2024-11-20 09:23:03.384417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:08.359 [2024-11-20 09:23:03.384431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:26:08.359 [2024-11-20 09:23:03.384444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.359 [2024-11-20 09:23:03.386732] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:08.359 [2024-11-20 09:23:03.406189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.359 [2024-11-20 09:23:03.406715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:08.359 [2024-11-20 09:23:03.406759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.451 ms 00:26:08.359 [2024-11-20 09:23:03.406777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.359 [2024-11-20 09:23:03.407026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.359 [2024-11-20 09:23:03.407052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:08.359 [2024-11-20 09:23:03.407068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:08.359 [2024-11-20 09:23:03.407082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.359 [2024-11-20 09:23:03.420888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.359 [2024-11-20 09:23:03.420993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:08.359 [2024-11-20 09:23:03.421015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.528 ms 00:26:08.359 [2024-11-20 09:23:03.421029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.359 [2024-11-20 09:23:03.421276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.359 [2024-11-20 09:23:03.421300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:08.359 [2024-11-20 09:23:03.421317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:26:08.359 [2024-11-20 09:23:03.421329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.359 [2024-11-20 09:23:03.421516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.359 [2024-11-20 09:23:03.421545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:08.359 [2024-11-20 09:23:03.421560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:26:08.359 [2024-11-20 09:23:03.421574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.359 [2024-11-20 09:23:03.421621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:08.359 [2024-11-20 09:23:03.427941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.359 [2024-11-20 09:23:03.428084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:08.359 [2024-11-20 09:23:03.428107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.336 ms 00:26:08.359 [2024-11-20 09:23:03.428138] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.359 [2024-11-20 09:23:03.428290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.360 [2024-11-20 09:23:03.428307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:08.360 [2024-11-20 09:23:03.428324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:08.360 [2024-11-20 09:23:03.428339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.360 [2024-11-20 09:23:03.428438] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:08.360 [2024-11-20 09:23:03.428493] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:08.360 [2024-11-20 09:23:03.428546] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:08.360 [2024-11-20 09:23:03.428587] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:08.360 [2024-11-20 09:23:03.428753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:08.360 [2024-11-20 09:23:03.428778] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:08.360 [2024-11-20 09:23:03.428798] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:08.360 [2024-11-20 09:23:03.428817] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:08.360 [2024-11-20 09:23:03.428835] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:08.360 [2024-11-20 09:23:03.428850] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:08.360 [2024-11-20 09:23:03.428866] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:08.360 [2024-11-20 09:23:03.428881] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:08.360 [2024-11-20 09:23:03.428896] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:08.360 [2024-11-20 09:23:03.428926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.360 [2024-11-20 09:23:03.428941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:08.360 [2024-11-20 09:23:03.428957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:26:08.360 [2024-11-20 09:23:03.428972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.360 [2024-11-20 09:23:03.429084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.360 [2024-11-20 09:23:03.429103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:08.360 [2024-11-20 09:23:03.429118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:08.360 [2024-11-20 09:23:03.429133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.360 [2024-11-20 09:23:03.429279] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:08.360 [2024-11-20 09:23:03.429321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:08.360 [2024-11-20 09:23:03.429338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:26:08.360 [2024-11-20 09:23:03.429352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:08.360 [2024-11-20 09:23:03.429381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:08.360 [2024-11-20 09:23:03.429406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:08.360 [2024-11-20 09:23:03.429416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:08.360 [2024-11-20 09:23:03.429439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:08.360 [2024-11-20 09:23:03.429450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:08.360 [2024-11-20 09:23:03.429460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:08.360 [2024-11-20 09:23:03.429471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:08.360 [2024-11-20 09:23:03.429481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:08.360 [2024-11-20 09:23:03.429513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:08.360 [2024-11-20 09:23:03.429537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:08.360 [2024-11-20 09:23:03.429549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:08.360 [2024-11-20 09:23:03.429572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:08.360 [2024-11-20 09:23:03.429598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:08.360 [2024-11-20 09:23:03.429610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:08.360 [2024-11-20 09:23:03.429631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:08.360 [2024-11-20 09:23:03.429643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:08.360 [2024-11-20 09:23:03.429686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:08.360 [2024-11-20 09:23:03.429698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:08.360 [2024-11-20 09:23:03.429722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:08.360 [2024-11-20 09:23:03.429734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:08.360 [2024-11-20 09:23:03.429763] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:26:08.360 [2024-11-20 09:23:03.429774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:08.360 [2024-11-20 09:23:03.429786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:08.360 [2024-11-20 09:23:03.429799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:08.360 [2024-11-20 09:23:03.429812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:08.360 [2024-11-20 09:23:03.429832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:08.360 [2024-11-20 09:23:03.429856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:08.360 [2024-11-20 09:23:03.429870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429882] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:08.360 [2024-11-20 09:23:03.429895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:08.360 [2024-11-20 09:23:03.429909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:08.360 [2024-11-20 09:23:03.429923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:08.360 [2024-11-20 09:23:03.429937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:08.360 [2024-11-20 09:23:03.429950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:08.360 [2024-11-20 09:23:03.429961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:08.360 [2024-11-20 09:23:03.429973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:08.360 [2024-11-20 09:23:03.429984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:08.360 [2024-11-20 09:23:03.429996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:08.361 [2024-11-20 09:23:03.430010] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:08.361 [2024-11-20 09:23:03.430027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:08.361 [2024-11-20 09:23:03.430042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:08.361 [2024-11-20 09:23:03.430054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:08.361 [2024-11-20 09:23:03.430066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:08.361 [2024-11-20 09:23:03.430078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:08.361 [2024-11-20 09:23:03.430091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:08.361 [2024-11-20 09:23:03.430102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:08.361 [2024-11-20 09:23:03.430114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:08.361 [2024-11-20 09:23:03.430125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:08.361 [2024-11-20 09:23:03.430136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:08.361 [2024-11-20 09:23:03.430148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:08.361 [2024-11-20 09:23:03.430161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:08.361 [2024-11-20 09:23:03.430186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:08.361 [2024-11-20 09:23:03.430199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:08.361 [2024-11-20 09:23:03.430212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:08.361 [2024-11-20 09:23:03.430224] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:08.361 [2024-11-20 09:23:03.430251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:08.361 [2024-11-20 09:23:03.430265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:08.361 [2024-11-20 09:23:03.430277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:08.361 [2024-11-20 09:23:03.430289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:08.361 [2024-11-20 09:23:03.430301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:08.361 [2024-11-20 09:23:03.430315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.361 [2024-11-20 09:23:03.430327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:08.361 [2024-11-20 09:23:03.430340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.105 ms 00:26:08.361 [2024-11-20 09:23:03.430352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.619 [2024-11-20 09:23:03.478828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.619 [2024-11-20 09:23:03.478983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:08.619 [2024-11-20 09:23:03.479024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.382 ms 00:26:08.619 [2024-11-20 09:23:03.479047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.619 [2024-11-20 09:23:03.479306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.619 [2024-11-20 09:23:03.479339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:08.619 [2024-11-20 09:23:03.479362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.113 ms 00:26:08.619 [2024-11-20 09:23:03.479380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.619 [2024-11-20 09:23:03.546356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.619 [2024-11-20 09:23:03.546572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:08.619 [2024-11-20 09:23:03.546618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.730 ms 00:26:08.619 [2024-11-20 09:23:03.546644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.619 [2024-11-20 09:23:03.546902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.619 [2024-11-20 09:23:03.546941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:08.619 [2024-11-20 09:23:03.546994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:08.619 [2024-11-20 09:23:03.547020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.619 [2024-11-20 09:23:03.548253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.619 [2024-11-20 09:23:03.548313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:08.619 [2024-11-20 09:23:03.548346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:26:08.619 [2024-11-20 09:23:03.548370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.619 [2024-11-20 09:23:03.548791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.619 [2024-11-20 09:23:03.548848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:08.619 [2024-11-20 09:23:03.548878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:26:08.619 [2024-11-20 09:23:03.548916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.619 [2024-11-20 09:23:03.578574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.620 [2024-11-20 09:23:03.578741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:08.620 [2024-11-20 09:23:03.578791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.596 ms 00:26:08.620 [2024-11-20 09:23:03.578814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.620 [2024-11-20 09:23:03.605616] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:08.620 [2024-11-20 09:23:03.605803] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:08.620 [2024-11-20 09:23:03.605846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.620 [2024-11-20 09:23:03.605870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:08.620 [2024-11-20 09:23:03.605900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.576 ms 00:26:08.620 [2024-11-20 09:23:03.605924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.620 [2024-11-20 09:23:03.643906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.620 [2024-11-20 09:23:03.644633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:08.620 [2024-11-20 09:23:03.644758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.776 ms 00:26:08.620 [2024-11-20 09:23:03.644776] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.620 [2024-11-20 09:23:03.663490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.620 [2024-11-20 09:23:03.663972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:08.620 [2024-11-20 09:23:03.664006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.505 ms 00:26:08.620 [2024-11-20 09:23:03.664019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.620 [2024-11-20 09:23:03.681666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.620 [2024-11-20 09:23:03.681757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:08.620 [2024-11-20 09:23:03.681779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.511 ms 00:26:08.620 [2024-11-20 09:23:03.681791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.620 [2024-11-20 09:23:03.682981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.620 [2024-11-20 09:23:03.683147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:08.620 [2024-11-20 09:23:03.683181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:26:08.620 [2024-11-20 09:23:03.683194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.878 [2024-11-20 09:23:03.770459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.878 [2024-11-20 09:23:03.770993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:08.878 [2024-11-20 09:23:03.771032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.206 ms 00:26:08.878 [2024-11-20 09:23:03.771080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.878 [2024-11-20 09:23:03.788452] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:08.878 [2024-11-20 09:23:03.792706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.878 [2024-11-20 09:23:03.792761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:08.878 [2024-11-20 09:23:03.792782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.492 ms 00:26:08.878 [2024-11-20 09:23:03.792795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.878 [2024-11-20 09:23:03.792953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.878 [2024-11-20 09:23:03.792974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:08.878 [2024-11-20 09:23:03.792989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:08.878 [2024-11-20 09:23:03.793000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.878 [2024-11-20 09:23:03.793143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.878 [2024-11-20 09:23:03.793163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:08.878 [2024-11-20 09:23:03.793178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:08.878 [2024-11-20 09:23:03.793189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.878 [2024-11-20 09:23:03.793222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.878 [2024-11-20 09:23:03.793236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:26:08.878 [2024-11-20 09:23:03.793249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:08.878 [2024-11-20 09:23:03.793261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.878 [2024-11-20 09:23:03.793307] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:08.878 [2024-11-20 09:23:03.793324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.878 [2024-11-20 09:23:03.793341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:08.878 [2024-11-20 09:23:03.793354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:08.878 [2024-11-20 09:23:03.793366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.878 [2024-11-20 09:23:03.827879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.878 [2024-11-20 09:23:03.827974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:08.878 [2024-11-20 09:23:03.827997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.481 ms 00:26:08.879 [2024-11-20 09:23:03.828010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.879 [2024-11-20 09:23:03.828210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.879 [2024-11-20 09:23:03.828252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:08.879 [2024-11-20 09:23:03.828268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:08.879 [2024-11-20 09:23:03.828282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.879 [2024-11-20 09:23:03.830079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 447.152 ms, result 0 00:26:09.812  [2024-11-20T09:23:45.149Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 09:23:44.912738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:44.912806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:50.029 [2024-11-20 09:23:44.912829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:50.029 [2024-11-20 09:23:44.912842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:44.912874] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:50.029 [2024-11-20 09:23:44.916724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:44.916784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:50.029 [2024-11-20 09:23:44.916803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.822 ms 00:26:50.029 [2024-11-20 09:23:44.916816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:44.918570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:44.918618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:50.029 [2024-11-20 09:23:44.918635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:26:50.029 [2024-11-20 09:23:44.918665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:44.935935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:44.936402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:50.029 [2024-11-20 09:23:44.936436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.231 ms 00:26:50.029 [2024-11-20 09:23:44.936450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:44.943151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:44.943526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:50.029 [2024-11-20 09:23:44.943581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.556 ms 00:26:50.029 [2024-11-20 09:23:44.943602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:44.981028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:44.981210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:50.029 [2024-11-20
09:23:44.981244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.233 ms 00:26:50.029 [2024-11-20 09:23:44.981264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:45.002352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:45.002471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:50.029 [2024-11-20 09:23:45.002506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.944 ms 00:26:50.029 [2024-11-20 09:23:45.002526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:45.002887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:45.002919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:50.029 [2024-11-20 09:23:45.002961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:26:50.029 [2024-11-20 09:23:45.002982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:45.040941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:45.041430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:50.029 [2024-11-20 09:23:45.041476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.914 ms 00:26:50.029 [2024-11-20 09:23:45.041495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:45.079027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:45.079152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:50.029 [2024-11-20 09:23:45.079230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.402 ms 00:26:50.029 [2024-11-20 09:23:45.079250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.029 [2024-11-20 09:23:45.117020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.029 [2024-11-20 09:23:45.117132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:50.029 [2024-11-20 09:23:45.117165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.620 ms 00:26:50.029 [2024-11-20 09:23:45.117184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.288 [2024-11-20 09:23:45.154847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.288 [2024-11-20 09:23:45.154954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:50.288 [2024-11-20 09:23:45.154989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.388 ms 00:26:50.288 [2024-11-20 09:23:45.155008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.288 [2024-11-20 09:23:45.155146] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:50.288 [2024-11-20 09:23:45.155186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:50.288 [2024-11-20 09:23:45.155826] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.155847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.155868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.155888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.155908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.155929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.155950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.155970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.155989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 
09:23:45.156587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.156989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:26:50.289 [2024-11-20 09:23:45.157138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:50.289 [2024-11-20 09:23:45.157619] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:50.289 [2024-11-20 09:23:45.157673] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 26796cd3-a23c-498c-9a73-5c5d333b72c6 00:26:50.289 [2024-11-20 09:23:45.157698] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:50.289 [2024-11-20 
09:23:45.157727] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:50.289 [2024-11-20 09:23:45.157747] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:50.289 [2024-11-20 09:23:45.157781] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:50.289 [2024-11-20 09:23:45.157801] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:50.289 [2024-11-20 09:23:45.157822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:50.289 [2024-11-20 09:23:45.157841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:50.289 [2024-11-20 09:23:45.157880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:50.289 [2024-11-20 09:23:45.157901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:50.289 [2024-11-20 09:23:45.157926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.289 [2024-11-20 09:23:45.157949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:50.289 [2024-11-20 09:23:45.157974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.782 ms 00:26:50.289 [2024-11-20 09:23:45.157995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.289 [2024-11-20 09:23:45.178969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.289 [2024-11-20 09:23:45.179321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:50.289 [2024-11-20 09:23:45.179470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.829 ms 00:26:50.289 [2024-11-20 09:23:45.179541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.289 [2024-11-20 09:23:45.180321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:50.289 [2024-11-20 09:23:45.180467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:50.289 [2024-11-20 09:23:45.180625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:26:50.289 [2024-11-20 09:23:45.180835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.290 [2024-11-20 09:23:45.226739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.290 [2024-11-20 09:23:45.227080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:50.290 [2024-11-20 09:23:45.227249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.290 [2024-11-20 09:23:45.227321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.290 [2024-11-20 09:23:45.227547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.290 [2024-11-20 09:23:45.227635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:50.290 [2024-11-20 09:23:45.227856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.290 [2024-11-20 09:23:45.228026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.290 [2024-11-20 09:23:45.228425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.290 [2024-11-20 09:23:45.228573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:50.290 [2024-11-20 09:23:45.228781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.290 [2024-11-20 09:23:45.228940] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:50.290 [2024-11-20 09:23:45.229114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.290 [2024-11-20 09:23:45.229262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:50.290 [2024-11-20 09:23:45.229406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.290 [2024-11-20 09:23:45.229552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.290 [2024-11-20 09:23:45.346269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.290 [2024-11-20 09:23:45.346392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:50.290 [2024-11-20 09:23:45.346427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.290 [2024-11-20 09:23:45.346446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.548 [2024-11-20 09:23:45.443637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.548 [2024-11-20 09:23:45.443780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:50.548 [2024-11-20 09:23:45.443812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.548 [2024-11-20 09:23:45.443832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.548 [2024-11-20 09:23:45.444024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.548 [2024-11-20 09:23:45.444059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:50.548 [2024-11-20 09:23:45.444079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.548 [2024-11-20 09:23:45.444096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.548 [2024-11-20 09:23:45.444169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.548 [2024-11-20 09:23:45.444204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:50.548 [2024-11-20 09:23:45.444228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.548 [2024-11-20 09:23:45.444247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.548 [2024-11-20 09:23:45.444440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.548 [2024-11-20 09:23:45.444480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:50.548 [2024-11-20 09:23:45.444502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.548 [2024-11-20 09:23:45.444521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.548 [2024-11-20 09:23:45.444600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.548 [2024-11-20 09:23:45.444627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:50.548 [2024-11-20 09:23:45.444679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.548 [2024-11-20 09:23:45.444705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.548 [2024-11-20 09:23:45.444777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.548 [2024-11-20 09:23:45.444810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:50.548 [2024-11-20 09:23:45.444840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:26:50.548 [2024-11-20 09:23:45.444859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.548 [2024-11-20 09:23:45.444942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:50.548 [2024-11-20 09:23:45.444969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:50.548 [2024-11-20 09:23:45.444989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:50.548 [2024-11-20 09:23:45.445008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:50.548 [2024-11-20 09:23:45.445230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.440 ms, result 0 00:26:51.926 00:26:51.926 00:26:51.926 09:23:46 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:51.926 [2024-11-20 09:23:46.813192] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:26:51.926 [2024-11-20 09:23:46.813383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79802 ] 00:26:51.926 [2024-11-20 09:23:46.989732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.184 [2024-11-20 09:23:47.158106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.443 [2024-11-20 09:23:47.518113] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:52.443 [2024-11-20 09:23:47.518229] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:52.702 [2024-11-20 09:23:47.685688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.702 [2024-11-20 09:23:47.685774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:52.702 [2024-11-20 09:23:47.685805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:52.702 [2024-11-20 09:23:47.685819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.702 [2024-11-20 09:23:47.685907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.702 [2024-11-20 09:23:47.685926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:52.702 [2024-11-20 09:23:47.685944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:52.703 [2024-11-20 09:23:47.685955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.685987] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:52.703 [2024-11-20 09:23:47.687076] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:52.703 [2024-11-20 09:23:47.687167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.687184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:52.703 [2024-11-20 09:23:47.687198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:26:52.703 [2024-11-20 09:23:47.687209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 
[2024-11-20 09:23:47.689285] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:52.703 [2024-11-20 09:23:47.707679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.707796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:52.703 [2024-11-20 09:23:47.707819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.390 ms 00:26:52.703 [2024-11-20 09:23:47.707832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.708050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.708070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:52.703 [2024-11-20 09:23:47.708084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:52.703 [2024-11-20 09:23:47.708096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.718501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.718965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:52.703 [2024-11-20 09:23:47.719001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.250 ms 00:26:52.703 [2024-11-20 09:23:47.719015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.719162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.719181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:52.703 [2024-11-20 09:23:47.719194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:52.703 [2024-11-20 09:23:47.719206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.719318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.719336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:52.703 [2024-11-20 09:23:47.719349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:52.703 [2024-11-20 09:23:47.719361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.719399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:52.703 [2024-11-20 09:23:47.724781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.724855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:52.703 [2024-11-20 09:23:47.724879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.392 ms 00:26:52.703 [2024-11-20 09:23:47.724897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.724968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.724994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:52.703 [2024-11-20 09:23:47.725009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:52.703 [2024-11-20 09:23:47.725021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.725097] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 
00:26:52.703 [2024-11-20 09:23:47.725130] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:52.703 [2024-11-20 09:23:47.725176] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:52.703 [2024-11-20 09:23:47.725202] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:52.703 [2024-11-20 09:23:47.725315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:52.703 [2024-11-20 09:23:47.725331] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:52.703 [2024-11-20 09:23:47.725347] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:52.703 [2024-11-20 09:23:47.725363] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:52.703 [2024-11-20 09:23:47.725377] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:52.703 [2024-11-20 09:23:47.725389] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:52.703 [2024-11-20 09:23:47.725401] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:52.703 [2024-11-20 09:23:47.725412] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:52.703 [2024-11-20 09:23:47.725423] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:52.703 [2024-11-20 09:23:47.725441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.725452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:52.703 [2024-11-20 09:23:47.725465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:26:52.703 [2024-11-20 09:23:47.725476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.725573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.703 [2024-11-20 09:23:47.725589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:52.703 [2024-11-20 09:23:47.725603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:52.703 [2024-11-20 09:23:47.725614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.703 [2024-11-20 09:23:47.725766] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:52.703 [2024-11-20 09:23:47.725794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:52.703 [2024-11-20 09:23:47.725816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:52.703 [2024-11-20 09:23:47.725828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.703 [2024-11-20 09:23:47.725840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:52.703 [2024-11-20 09:23:47.725850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:52.703 [2024-11-20 09:23:47.725861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:52.703 [2024-11-20 09:23:47.725873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:52.703 [2024-11-20 09:23:47.725884] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:52.703 [2024-11-20 09:23:47.725894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:52.703 [2024-11-20 09:23:47.725904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:52.703 [2024-11-20 09:23:47.725914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:52.703 [2024-11-20 09:23:47.725924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:52.703 [2024-11-20 09:23:47.725934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:52.703 [2024-11-20 09:23:47.725945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:52.703 [2024-11-20 09:23:47.725967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.703 [2024-11-20 09:23:47.725978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:52.703 [2024-11-20 09:23:47.725989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:52.703 [2024-11-20 09:23:47.726001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.703 [2024-11-20 09:23:47.726013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:52.703 [2024-11-20 09:23:47.726024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:52.703 [2024-11-20 09:23:47.726034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.703 [2024-11-20 09:23:47.726045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:52.703 [2024-11-20 09:23:47.726055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:52.703 [2024-11-20 09:23:47.726066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.703 [2024-11-20 09:23:47.726077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:52.703 [2024-11-20 09:23:47.726087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:52.703 [2024-11-20 09:23:47.726097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.703 [2024-11-20 09:23:47.726108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:52.703 [2024-11-20 09:23:47.726119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:52.703 [2024-11-20 09:23:47.726129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.703 [2024-11-20 09:23:47.726140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:52.703 [2024-11-20 09:23:47.726150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:52.703 [2024-11-20 09:23:47.726160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:52.703 [2024-11-20 09:23:47.726170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:52.703 [2024-11-20 09:23:47.726194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:52.703 [2024-11-20 09:23:47.726213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:52.703 [2024-11-20 09:23:47.726224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:52.703 [2024-11-20 09:23:47.726236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:52.703 [2024-11-20 09:23:47.726246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.703 [2024-11-20 
09:23:47.726257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:52.703 [2024-11-20 09:23:47.726267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:52.703 [2024-11-20 09:23:47.726277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.703 [2024-11-20 09:23:47.726288] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:52.703 [2024-11-20 09:23:47.726300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:52.703 [2024-11-20 09:23:47.726312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:52.703 [2024-11-20 09:23:47.726322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.703 [2024-11-20 09:23:47.726334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:52.704 [2024-11-20 09:23:47.726345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:52.704 [2024-11-20 09:23:47.726355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:52.704 [2024-11-20 09:23:47.726367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:52.704 [2024-11-20 09:23:47.726377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:52.704 [2024-11-20 09:23:47.726387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:52.704 [2024-11-20 09:23:47.726399] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:52.704 [2024-11-20 09:23:47.726413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.704 [2024-11-20 09:23:47.726426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:52.704 [2024-11-20 09:23:47.726438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:52.704 [2024-11-20 09:23:47.726449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:52.704 [2024-11-20 09:23:47.726461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:52.704 [2024-11-20 09:23:47.726472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:52.704 [2024-11-20 09:23:47.726488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:52.704 [2024-11-20 09:23:47.726499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:52.704 [2024-11-20 09:23:47.726510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:52.704 [2024-11-20 09:23:47.726532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:52.704 [2024-11-20 09:23:47.726543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:52.704 [2024-11-20 09:23:47.726554] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:52.704 [2024-11-20 09:23:47.726565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:52.704 [2024-11-20 09:23:47.726576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:52.704 [2024-11-20 09:23:47.726587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:52.704 [2024-11-20 09:23:47.726598] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:52.704 [2024-11-20 09:23:47.726617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.704 [2024-11-20 09:23:47.726630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:52.704 [2024-11-20 09:23:47.726642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:52.704 [2024-11-20 09:23:47.727148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:52.704 [2024-11-20 09:23:47.727289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:52.704 [2024-11-20 09:23:47.727423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.704 [2024-11-20 09:23:47.727474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:52.704 [2024-11-20 09:23:47.727680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:26:52.704 [2024-11-20 09:23:47.727812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.704 [2024-11-20 09:23:47.768661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.704 [2024-11-20 09:23:47.769082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:52.704 [2024-11-20 09:23:47.769214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.722 ms 00:26:52.704 [2024-11-20 09:23:47.769265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.704 [2024-11-20 09:23:47.769510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.704 [2024-11-20 09:23:47.769559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:52.704 [2024-11-20 09:23:47.769677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:26:52.704 [2024-11-20 09:23:47.769812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.962 [2024-11-20 09:23:47.828085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.962 [2024-11-20 09:23:47.828416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:52.963 [2024-11-20 09:23:47.828452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.118 ms 00:26:52.963 [2024-11-20 09:23:47.828465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:47.828553] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:47.828572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:52.963 [2024-11-20 09:23:47.828594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:52.963 [2024-11-20 09:23:47.828615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:47.829356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:47.829387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:52.963 [2024-11-20 09:23:47.829401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:26:52.963 [2024-11-20 09:23:47.829412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:47.829595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:47.829615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:52.963 [2024-11-20 09:23:47.829629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:26:52.963 [2024-11-20 09:23:47.829665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:47.849619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:47.849718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:52.963 [2024-11-20 09:23:47.849746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.918 ms 00:26:52.963 [2024-11-20 09:23:47.849759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:47.868076] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:52.963 [2024-11-20 09:23:47.868180] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:52.963 [2024-11-20 09:23:47.868210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:47.868224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:52.963 [2024-11-20 09:23:47.868242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.255 ms 00:26:52.963 [2024-11-20 09:23:47.868254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:47.900361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:47.900509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:52.963 [2024-11-20 09:23:47.900534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.003 ms 00:26:52.963 [2024-11-20 09:23:47.900547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:47.919566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:47.919986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:52.963 [2024-11-20 09:23:47.920021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.849 ms 00:26:52.963 [2024-11-20 09:23:47.920035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:47.938726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:52.963 [2024-11-20 09:23:47.939141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:52.963 [2024-11-20 09:23:47.939175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.576 ms 00:26:52.963 [2024-11-20 09:23:47.939188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:47.940299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:47.940335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:52.963 [2024-11-20 09:23:47.940351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:26:52.963 [2024-11-20 09:23:47.940378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:48.024615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:48.024728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:52.963 [2024-11-20 09:23:48.024772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.206 ms 00:26:52.963 [2024-11-20 09:23:48.024786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:48.042330] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:52.963 [2024-11-20 09:23:48.047364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:48.047461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:52.963 [2024-11-20 09:23:48.047485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.459 ms 00:26:52.963 [2024-11-20 09:23:48.047502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:48.047752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:48.047776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:52.963 [2024-11-20 09:23:48.047799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:52.963 [2024-11-20 09:23:48.047817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:48.047968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:48.047987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:52.963 [2024-11-20 09:23:48.048001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:52.963 [2024-11-20 09:23:48.048013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:48.048049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:48.048065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:52.963 [2024-11-20 09:23:48.048079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:52.963 [2024-11-20 09:23:48.048092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.963 [2024-11-20 09:23:48.048142] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:52.963 [2024-11-20 09:23:48.048163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.963 [2024-11-20 09:23:48.048177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:26:52.963 [2024-11-20 09:23:48.048189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:52.963 [2024-11-20 09:23:48.048201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.221 [2024-11-20 09:23:48.084580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.221 [2024-11-20 09:23:48.084706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:53.221 [2024-11-20 09:23:48.084730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.346 ms 00:26:53.221 [2024-11-20 09:23:48.084759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.221 [2024-11-20 09:23:48.084929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.221 [2024-11-20 09:23:48.084948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:53.221 [2024-11-20 09:23:48.084963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:53.221 [2024-11-20 09:23:48.084986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.221 [2024-11-20 09:23:48.086562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 400.300 ms, result 0 00:26:54.206  [2024-11-20T09:23:50.712Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-20T09:23:51.646Z] Copying: 51/1024 [MB] (25 MBps) [2024-11-20T09:23:52.580Z] Copying: 76/1024 [MB] (24 MBps) [2024-11-20T09:23:53.515Z] Copying: 101/1024 [MB] (24 MBps) [2024-11-20T09:23:54.449Z] Copying: 127/1024 [MB] (25 MBps) [2024-11-20T09:23:55.383Z] Copying: 150/1024 [MB] (23 MBps) [2024-11-20T09:23:56.317Z] Copying: 175/1024 [MB] (24 MBps) [2024-11-20T09:23:57.692Z] Copying: 200/1024 [MB] (25 MBps) [2024-11-20T09:23:58.627Z] Copying: 227/1024 [MB] (26 MBps) [2024-11-20T09:23:59.634Z] Copying: 252/1024 [MB] (25 MBps) [2024-11-20T09:24:00.569Z] Copying: 275/1024 [MB] (23 MBps) [2024-11-20T09:24:01.503Z] Copying: 299/1024 [MB] (23 MBps) [2024-11-20T09:24:02.453Z] Copying: 323/1024 [MB] (24 MBps) [2024-11-20T09:24:03.389Z] Copying: 349/1024 [MB] (26 MBps) [2024-11-20T09:24:04.326Z] Copying: 375/1024 [MB] (26 MBps) [2024-11-20T09:24:05.699Z] Copying: 401/1024 [MB] (25 MBps) [2024-11-20T09:24:06.636Z] Copying: 426/1024 [MB] (24 MBps) [2024-11-20T09:24:07.572Z] Copying: 451/1024 [MB] (25 MBps) [2024-11-20T09:24:08.507Z] Copying: 476/1024 [MB] (25 MBps) [2024-11-20T09:24:09.443Z] Copying: 503/1024 [MB] (26 MBps) [2024-11-20T09:24:10.378Z] Copying: 529/1024 [MB] (26 MBps) [2024-11-20T09:24:11.314Z] Copying: 554/1024 [MB] (25 MBps) [2024-11-20T09:24:12.695Z] Copying: 581/1024 [MB] (26 MBps) [2024-11-20T09:24:13.632Z] Copying: 607/1024 [MB] (26 MBps) [2024-11-20T09:24:14.568Z] Copying: 632/1024 [MB] (25 MBps) [2024-11-20T09:24:15.505Z] Copying: 655/1024 [MB] (23 MBps) [2024-11-20T09:24:16.466Z] Copying: 680/1024 [MB] (24 MBps) [2024-11-20T09:24:17.428Z] Copying: 704/1024 [MB] (24 MBps) [2024-11-20T09:24:18.363Z] Copying: 729/1024 [MB] (24 MBps) [2024-11-20T09:24:19.738Z] Copying: 753/1024 [MB] (24 MBps) [2024-11-20T09:24:20.672Z] Copying: 777/1024 [MB] (23 MBps) [2024-11-20T09:24:21.683Z] Copying: 801/1024 [MB] (24 MBps) [2024-11-20T09:24:22.617Z] Copying: 824/1024 [MB] (23 MBps) [2024-11-20T09:24:23.553Z] Copying: 848/1024 [MB] (24 MBps) [2024-11-20T09:24:24.487Z] Copying: 872/1024 [MB] (24 MBps) [2024-11-20T09:24:25.422Z] Copying: 896/1024 [MB] (23 MBps) [2024-11-20T09:24:26.368Z] Copying: 920/1024 [MB] (24 MBps) 
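
The 'FTL startup' management pipeline above completes in ~400 ms, and every step is reported as the same four-entry pattern (Action / name / duration / status) by trace_step in mngt/ftl_mngt.c. A minimal sketch for pulling per-step timings out of a saved console log — the file name console.log is illustrative, and it assumes the log was captured with one *NOTICE* entry per line as the console prints them:

    # pair each step name with its duration and list the slowest steps first
    grep -E 'trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] (name|duration):' console.log \
      | sed -E 's/.*(name|duration): //' \
      | paste - - \
      | sort -t "$(printf '\t')" -k2 -rn \
      | head

On this run that would put 'Restore P2L checkpoints' (84.206 ms) and 'Initialize NV cache' (58.118 ms) at the top.
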
[2024-11-20T09:24:27.311Z] Copying: 945/1024 [MB] (24 MBps) [2024-11-20T09:24:28.689Z] Copying: 968/1024 [MB] (23 MBps) [2024-11-20T09:24:29.624Z] Copying: 991/1024 [MB] (23 MBps) [2024-11-20T09:24:29.881Z] Copying: 1015/1024 [MB] (24 MBps) [2024-11-20T09:24:30.140Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 09:24:29.961080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.020 [2024-11-20 09:24:29.961358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:35.020 [2024-11-20 09:24:29.961669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:35.020 [2024-11-20 09:24:29.961803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.020 [2024-11-20 09:24:29.961885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:35.020 [2024-11-20 09:24:29.966009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.020 [2024-11-20 09:24:29.966048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:35.020 [2024-11-20 09:24:29.966088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.891 ms 00:27:35.020 [2024-11-20 09:24:29.966100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.020 [2024-11-20 09:24:29.966380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.020 [2024-11-20 09:24:29.966405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:35.020 [2024-11-20 09:24:29.966419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:27:35.021 [2024-11-20 09:24:29.966431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.021 [2024-11-20 09:24:29.970456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.021 [2024-11-20 09:24:29.970489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:35.021 [2024-11-20 09:24:29.970505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:27:35.021 [2024-11-20 09:24:29.970517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.021 [2024-11-20 09:24:29.978104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.021 [2024-11-20 09:24:29.978156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:35.021 [2024-11-20 09:24:29.978186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.547 ms 00:27:35.021 [2024-11-20 09:24:29.978205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.021 [2024-11-20 09:24:30.010351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.021 [2024-11-20 09:24:30.010403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:35.021 [2024-11-20 09:24:30.010422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.049 ms 00:27:35.021 [2024-11-20 09:24:30.010433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.021 [2024-11-20 09:24:30.028254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.021 [2024-11-20 09:24:30.028299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:35.021 [2024-11-20 09:24:30.028332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.774 ms 00:27:35.021 [2024-11-20 09:24:30.028343] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.021 [2024-11-20 09:24:30.028520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.021 [2024-11-20 09:24:30.028548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:35.021 [2024-11-20 09:24:30.028562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:27:35.021 [2024-11-20 09:24:30.028573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.021 [2024-11-20 09:24:30.059108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.021 [2024-11-20 09:24:30.059178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:35.021 [2024-11-20 09:24:30.059226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.511 ms 00:27:35.021 [2024-11-20 09:24:30.059237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.021 [2024-11-20 09:24:30.090445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.021 [2024-11-20 09:24:30.090521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:35.021 [2024-11-20 09:24:30.090588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.161 ms 00:27:35.021 [2024-11-20 09:24:30.090599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.021 [2024-11-20 09:24:30.121653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.021 [2024-11-20 09:24:30.121711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:35.021 [2024-11-20 09:24:30.121730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.011 ms 00:27:35.021 [2024-11-20 09:24:30.121741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.282 [2024-11-20 09:24:30.150839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.282 [2024-11-20 09:24:30.151054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:35.282 [2024-11-20 09:24:30.151083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.990 ms 00:27:35.282 [2024-11-20 09:24:30.151095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.282 [2024-11-20 09:24:30.151155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:35.282 [2024-11-20 09:24:30.151180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 
state: free 00:27:35.282 [2024-11-20 09:24:30.151302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 
/ 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.151992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.152003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.152014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.152025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.152036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.152048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.152059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:35.282 [2024-11-20 09:24:30.152070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152337] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:35.283 [2024-11-20 09:24:30.152571] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:35.283 [2024-11-20 09:24:30.152587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 26796cd3-a23c-498c-9a73-5c5d333b72c6 00:27:35.283 [2024-11-20 09:24:30.152599] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:35.283 [2024-11-20 09:24:30.152610] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:35.283 [2024-11-20 09:24:30.152621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:35.283 [2024-11-20 09:24:30.152633] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:35.283 [2024-11-20 09:24:30.152644] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:35.283 [2024-11-20 09:24:30.152655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:35.283 [2024-11-20 09:24:30.152678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] high: 0 00:27:35.283 [2024-11-20 09:24:30.152689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:35.283 [2024-11-20 09:24:30.152700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:35.283 [2024-11-20 09:24:30.152723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.283 [2024-11-20 09:24:30.152735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:35.283 [2024-11-20 09:24:30.152748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.569 ms 00:27:35.283 [2024-11-20 09:24:30.152759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.283 [2024-11-20 09:24:30.168982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.283 [2024-11-20 09:24:30.169030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:35.283 [2024-11-20 09:24:30.169061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.176 ms 00:27:35.283 [2024-11-20 09:24:30.169073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.283 [2024-11-20 09:24:30.169540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:35.283 [2024-11-20 09:24:30.169559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:35.283 [2024-11-20 09:24:30.169572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:27:35.283 [2024-11-20 09:24:30.169589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.283 [2024-11-20 09:24:30.213194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.283 [2024-11-20 09:24:30.213248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:35.283 [2024-11-20 09:24:30.213295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.283 [2024-11-20 09:24:30.213346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.283 [2024-11-20 09:24:30.213432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.283 [2024-11-20 09:24:30.213447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:35.283 [2024-11-20 09:24:30.213460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.283 [2024-11-20 09:24:30.213478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.283 [2024-11-20 09:24:30.213591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.283 [2024-11-20 09:24:30.213611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:35.283 [2024-11-20 09:24:30.213623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.283 [2024-11-20 09:24:30.213635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.283 [2024-11-20 09:24:30.213657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.283 [2024-11-20 09:24:30.213671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:35.283 [2024-11-20 09:24:30.213682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.283 [2024-11-20 09:24:30.213693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.283 [2024-11-20 09:24:30.320032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.283 [2024-11-20 09:24:30.320103] 
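
The statistics dump above reports user writes: 0 against total writes: 960, so every recorded block write on this instance was FTL metadata; WAF (write amplification, roughly total device writes over user writes) is therefore undefined and printed as inf, and the crit/high/low/start limit counters are all zero. Reading the numbers the same way, guarded against the zero denominator — values copied from the dump, the guard logic is just an illustration of why inf appears:

    # WAF = total writes / user writes, with 0 user writes reported as inf
    awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'
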
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:35.283 [2024-11-20 09:24:30.320135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.283 [2024-11-20 09:24:30.320147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.542 [2024-11-20 09:24:30.406774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.542 [2024-11-20 09:24:30.406842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:35.542 [2024-11-20 09:24:30.406875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.542 [2024-11-20 09:24:30.406903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.542 [2024-11-20 09:24:30.407011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.542 [2024-11-20 09:24:30.407028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:35.542 [2024-11-20 09:24:30.407039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.542 [2024-11-20 09:24:30.407050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.542 [2024-11-20 09:24:30.407098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.542 [2024-11-20 09:24:30.407112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:35.542 [2024-11-20 09:24:30.407124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.542 [2024-11-20 09:24:30.407134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.542 [2024-11-20 09:24:30.407264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.542 [2024-11-20 09:24:30.407284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:35.542 [2024-11-20 09:24:30.407296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.542 [2024-11-20 09:24:30.407307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.542 [2024-11-20 09:24:30.407357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.542 [2024-11-20 09:24:30.407375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:35.542 [2024-11-20 09:24:30.407387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.542 [2024-11-20 09:24:30.407398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.542 [2024-11-20 09:24:30.407461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.542 [2024-11-20 09:24:30.407482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:35.542 [2024-11-20 09:24:30.407495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.542 [2024-11-20 09:24:30.407505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.542 [2024-11-20 09:24:30.407560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.542 [2024-11-20 09:24:30.407578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:35.542 [2024-11-20 09:24:30.407590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.542 [2024-11-20 09:24:30.407601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.542 [2024-11-20 09:24:30.407831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: 
[FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 446.688 ms, result 0 00:27:36.476 00:27:36.476 00:27:36.476 09:24:31 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:39.011 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:39.011 09:24:33 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:27:39.011 [2024-11-20 09:24:33.651311] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:27:39.011 [2024-11-20 09:24:33.651493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80262 ] 00:27:39.011 [2024-11-20 09:24:33.833341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.011 [2024-11-20 09:24:34.008886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.270 [2024-11-20 09:24:34.386771] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:39.270 [2024-11-20 09:24:34.386859] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:39.530 [2024-11-20 09:24:34.551787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.552003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:39.530 [2024-11-20 09:24:34.552046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:39.530 [2024-11-20 09:24:34.552060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.552143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.552162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:39.530 [2024-11-20 09:24:34.552180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:39.530 [2024-11-20 09:24:34.552192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.552224] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:39.530 [2024-11-20 09:24:34.553151] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:39.530 [2024-11-20 09:24:34.553192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.553208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:39.530 [2024-11-20 09:24:34.553221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:27:39.530 [2024-11-20 09:24:34.553233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.555140] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:39.530 [2024-11-20 09:24:34.572064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.572105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:39.530 [2024-11-20 09:24:34.572137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
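
Here restore.sh first confirms (line 76) that the test file matches its recorded checksum (testfile: OK), then (line 79) writes testfile back through ftl0 at block offset 131072 with spdk_dd. A minimal sketch of that write / read-back / compare pattern, assuming spdk_dd also accepts dd-style --ib/--of/--bs/--skip/--count options alongside the --if/--ob/--json/--seek ones shown above; the readback file name and the 4096-byte block size are illustrative:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    # write the test file through the FTL bdev at a block offset
    "$DD" --if=testfile --ob=ftl0 --json="$CFG" --seek=131072
    # read the same range back out of the bdev into a scratch file
    "$DD" --ib=ftl0 --of=readback --json="$CFG" --bs=4096 \
          --skip=131072 --count=$(( $(stat -c %s testfile) / 4096 ))
    # identical contents mean FTL persisted the data across shutdown/startup
    cmp testfile readback
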
16.926 ms 00:27:39.530 [2024-11-20 09:24:34.572150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.572230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.572259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:39.530 [2024-11-20 09:24:34.572272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:39.530 [2024-11-20 09:24:34.572284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.581483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.581529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:39.530 [2024-11-20 09:24:34.581561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.106 ms 00:27:39.530 [2024-11-20 09:24:34.581574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.581720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.581741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:39.530 [2024-11-20 09:24:34.581754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:27:39.530 [2024-11-20 09:24:34.581766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.581828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.581846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:39.530 [2024-11-20 09:24:34.581859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:39.530 [2024-11-20 09:24:34.581871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.581908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:39.530 [2024-11-20 09:24:34.586985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.587024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:39.530 [2024-11-20 09:24:34.587040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.088 ms 00:27:39.530 [2024-11-20 09:24:34.587057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.587098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.530 [2024-11-20 09:24:34.587114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:39.530 [2024-11-20 09:24:34.587127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:39.530 [2024-11-20 09:24:34.587148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.530 [2024-11-20 09:24:34.587224] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:39.530 [2024-11-20 09:24:34.587259] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:39.530 [2024-11-20 09:24:34.587327] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:39.530 [2024-11-20 09:24:34.587354] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 
bytes 00:27:39.531 [2024-11-20 09:24:34.587473] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:39.531 [2024-11-20 09:24:34.587489] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:39.531 [2024-11-20 09:24:34.587505] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:39.531 [2024-11-20 09:24:34.587521] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:39.531 [2024-11-20 09:24:34.587535] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:39.531 [2024-11-20 09:24:34.587548] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:39.531 [2024-11-20 09:24:34.587560] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:39.531 [2024-11-20 09:24:34.587581] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:39.531 [2024-11-20 09:24:34.587602] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:39.531 [2024-11-20 09:24:34.587630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.531 [2024-11-20 09:24:34.587642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:39.531 [2024-11-20 09:24:34.587687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:27:39.531 [2024-11-20 09:24:34.587710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.531 [2024-11-20 09:24:34.587810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.531 [2024-11-20 09:24:34.587826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:39.531 [2024-11-20 09:24:34.587839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:39.531 [2024-11-20 09:24:34.587851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.531 [2024-11-20 09:24:34.587979] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:39.531 [2024-11-20 09:24:34.588004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:39.531 [2024-11-20 09:24:34.588017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:39.531 [2024-11-20 09:24:34.588029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:39.531 [2024-11-20 09:24:34.588052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:39.531 [2024-11-20 09:24:34.588073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:39.531 [2024-11-20 09:24:34.588084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:39.531 [2024-11-20 09:24:34.588105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:39.531 [2024-11-20 09:24:34.588117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:39.531 [2024-11-20 09:24:34.588127] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:39.531 [2024-11-20 09:24:34.588138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:39.531 [2024-11-20 09:24:34.588149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:39.531 [2024-11-20 09:24:34.588170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:39.531 [2024-11-20 09:24:34.588193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:39.531 [2024-11-20 09:24:34.588204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:39.531 [2024-11-20 09:24:34.588227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.531 [2024-11-20 09:24:34.588249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:39.531 [2024-11-20 09:24:34.588267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.531 [2024-11-20 09:24:34.588288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:39.531 [2024-11-20 09:24:34.588299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.531 [2024-11-20 09:24:34.588320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:39.531 [2024-11-20 09:24:34.588330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.531 [2024-11-20 09:24:34.588352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:39.531 [2024-11-20 09:24:34.588363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:39.531 [2024-11-20 09:24:34.588383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:39.531 [2024-11-20 09:24:34.588394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:39.531 [2024-11-20 09:24:34.588405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:39.531 [2024-11-20 09:24:34.588415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:39.531 [2024-11-20 09:24:34.588426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:39.531 [2024-11-20 09:24:34.588437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:39.531 [2024-11-20 09:24:34.588458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:39.531 [2024-11-20 09:24:34.588469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588489] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:39.531 [2024-11-20 
09:24:34.588501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:39.531 [2024-11-20 09:24:34.588513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:39.531 [2024-11-20 09:24:34.588524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.531 [2024-11-20 09:24:34.588536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:39.531 [2024-11-20 09:24:34.588548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:39.531 [2024-11-20 09:24:34.588559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:39.531 [2024-11-20 09:24:34.588571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:39.531 [2024-11-20 09:24:34.588582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:39.531 [2024-11-20 09:24:34.588603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:39.531 [2024-11-20 09:24:34.588615] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:39.531 [2024-11-20 09:24:34.588629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.531 [2024-11-20 09:24:34.588642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:39.531 [2024-11-20 09:24:34.588671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:39.531 [2024-11-20 09:24:34.588683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:39.531 [2024-11-20 09:24:34.588695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:39.531 [2024-11-20 09:24:34.588715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:39.531 [2024-11-20 09:24:34.588727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:39.531 [2024-11-20 09:24:34.588738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:39.531 [2024-11-20 09:24:34.588750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:39.531 [2024-11-20 09:24:34.588762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:39.531 [2024-11-20 09:24:34.588773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:39.531 [2024-11-20 09:24:34.588785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:39.531 [2024-11-20 09:24:34.588796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:39.531 [2024-11-20 09:24:34.588808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 
blk_sz:0x20 00:27:39.531 [2024-11-20 09:24:34.588819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:39.531 [2024-11-20 09:24:34.588831] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:39.531 [2024-11-20 09:24:34.588851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.531 [2024-11-20 09:24:34.588864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:39.531 [2024-11-20 09:24:34.588876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:39.531 [2024-11-20 09:24:34.588887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:39.531 [2024-11-20 09:24:34.588899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:39.531 [2024-11-20 09:24:34.588911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.531 [2024-11-20 09:24:34.588927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:39.531 [2024-11-20 09:24:34.588939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:27:39.531 [2024-11-20 09:24:34.588951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.531 [2024-11-20 09:24:34.630892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.531 [2024-11-20 09:24:34.630962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:39.532 [2024-11-20 09:24:34.630998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.867 ms 00:27:39.532 [2024-11-20 09:24:34.631024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.532 [2024-11-20 09:24:34.631159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.532 [2024-11-20 09:24:34.631177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:39.532 [2024-11-20 09:24:34.631203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:27:39.532 [2024-11-20 09:24:34.631226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.687215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.687277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:39.792 [2024-11-20 09:24:34.687312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.886 ms 00:27:39.792 [2024-11-20 09:24:34.687324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.687392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.687410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:39.792 [2024-11-20 09:24:34.687423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:39.792 [2024-11-20 09:24:34.687441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.688337] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.688527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:39.792 [2024-11-20 09:24:34.688702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:27:39.792 [2024-11-20 09:24:34.688758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.688976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.689049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:39.792 [2024-11-20 09:24:34.689069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:27:39.792 [2024-11-20 09:24:34.689090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.709112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.709161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:39.792 [2024-11-20 09:24:34.709186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.989 ms 00:27:39.792 [2024-11-20 09:24:34.709199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.726118] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:39.792 [2024-11-20 09:24:34.726312] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:39.792 [2024-11-20 09:24:34.726337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.726350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:39.792 [2024-11-20 09:24:34.726364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.003 ms 00:27:39.792 [2024-11-20 09:24:34.726375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.756116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.756192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:39.792 [2024-11-20 09:24:34.756213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.671 ms 00:27:39.792 [2024-11-20 09:24:34.756226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.773015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.773108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:39.792 [2024-11-20 09:24:34.773142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.713 ms 00:27:39.792 [2024-11-20 09:24:34.773153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.788794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.788838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:39.792 [2024-11-20 09:24:34.788855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.593 ms 00:27:39.792 [2024-11-20 09:24:34.788867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.789821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 
[2024-11-20 09:24:34.789859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:39.792 [2024-11-20 09:24:34.789876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:27:39.792 [2024-11-20 09:24:34.789894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.865170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.865260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:39.792 [2024-11-20 09:24:34.865305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.243 ms 00:27:39.792 [2024-11-20 09:24:34.865332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.877822] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:39.792 [2024-11-20 09:24:34.881004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.881038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:39.792 [2024-11-20 09:24:34.881070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.602 ms 00:27:39.792 [2024-11-20 09:24:34.881082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.881189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.881209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:39.792 [2024-11-20 09:24:34.881223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:39.792 [2024-11-20 09:24:34.881238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.881333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.881352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:39.792 [2024-11-20 09:24:34.881364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:39.792 [2024-11-20 09:24:34.881375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.881406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.881439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:39.792 [2024-11-20 09:24:34.881452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:39.792 [2024-11-20 09:24:34.881463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.792 [2024-11-20 09:24:34.881508] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:39.792 [2024-11-20 09:24:34.881529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.792 [2024-11-20 09:24:34.881541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:39.792 [2024-11-20 09:24:34.881554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:39.792 [2024-11-20 09:24:34.881566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.052 [2024-11-20 09:24:34.912282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.052 [2024-11-20 09:24:34.912324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:40.052 
[2024-11-20 09:24:34.912357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.675 ms 00:27:40.052 [2024-11-20 09:24:34.912376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.052 [2024-11-20 09:24:34.912484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:40.052 [2024-11-20 09:24:34.912503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:40.052 [2024-11-20 09:24:34.912515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:40.052 [2024-11-20 09:24:34.912526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:40.052 [2024-11-20 09:24:34.914107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 361.651 ms, result 0 00:27:41.037  [2024-11-20T09:24:37.092Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-20T09:24:38.029Z] Copying: 50/1024 [MB] (24 MBps) [2024-11-20T09:24:38.966Z] Copying: 75/1024 [MB] (25 MBps) [2024-11-20T09:24:40.341Z] Copying: 100/1024 [MB] (24 MBps) [2024-11-20T09:24:41.288Z] Copying: 126/1024 [MB] (25 MBps) [2024-11-20T09:24:42.239Z] Copying: 150/1024 [MB] (24 MBps) [2024-11-20T09:24:43.174Z] Copying: 176/1024 [MB] (25 MBps) [2024-11-20T09:24:44.111Z] Copying: 202/1024 [MB] (25 MBps) [2024-11-20T09:24:45.045Z] Copying: 225/1024 [MB] (23 MBps) [2024-11-20T09:24:45.980Z] Copying: 250/1024 [MB] (24 MBps) [2024-11-20T09:24:46.943Z] Copying: 274/1024 [MB] (24 MBps) [2024-11-20T09:24:48.320Z] Copying: 299/1024 [MB] (25 MBps) [2024-11-20T09:24:49.254Z] Copying: 325/1024 [MB] (25 MBps) [2024-11-20T09:24:50.188Z] Copying: 351/1024 [MB] (25 MBps) [2024-11-20T09:24:51.122Z] Copying: 377/1024 [MB] (26 MBps) [2024-11-20T09:24:52.055Z] Copying: 402/1024 [MB] (25 MBps) [2024-11-20T09:24:52.991Z] Copying: 428/1024 [MB] (25 MBps) [2024-11-20T09:24:54.366Z] Copying: 453/1024 [MB] (25 MBps) [2024-11-20T09:24:54.933Z] Copying: 478/1024 [MB] (24 MBps) [2024-11-20T09:24:56.308Z] Copying: 502/1024 [MB] (24 MBps) [2024-11-20T09:24:57.243Z] Copying: 527/1024 [MB] (24 MBps) [2024-11-20T09:24:58.177Z] Copying: 553/1024 [MB] (25 MBps) [2024-11-20T09:24:59.112Z] Copying: 578/1024 [MB] (25 MBps) [2024-11-20T09:25:00.048Z] Copying: 604/1024 [MB] (25 MBps) [2024-11-20T09:25:00.984Z] Copying: 628/1024 [MB] (23 MBps) [2024-11-20T09:25:02.357Z] Copying: 652/1024 [MB] (24 MBps) [2024-11-20T09:25:03.291Z] Copying: 677/1024 [MB] (25 MBps) [2024-11-20T09:25:04.224Z] Copying: 704/1024 [MB] (26 MBps) [2024-11-20T09:25:05.158Z] Copying: 730/1024 [MB] (25 MBps) [2024-11-20T09:25:06.095Z] Copying: 754/1024 [MB] (23 MBps) [2024-11-20T09:25:07.030Z] Copying: 778/1024 [MB] (24 MBps) [2024-11-20T09:25:07.966Z] Copying: 805/1024 [MB] (26 MBps) [2024-11-20T09:25:09.340Z] Copying: 830/1024 [MB] (25 MBps) [2024-11-20T09:25:10.274Z] Copying: 856/1024 [MB] (26 MBps) [2024-11-20T09:25:11.208Z] Copying: 882/1024 [MB] (25 MBps) [2024-11-20T09:25:12.150Z] Copying: 907/1024 [MB] (25 MBps) [2024-11-20T09:25:13.083Z] Copying: 932/1024 [MB] (25 MBps) [2024-11-20T09:25:14.015Z] Copying: 957/1024 [MB] (25 MBps) [2024-11-20T09:25:14.948Z] Copying: 983/1024 [MB] (25 MBps) [2024-11-20T09:25:16.328Z] Copying: 1008/1024 [MB] (25 MBps) [2024-11-20T09:25:16.895Z] Copying: 1023/1024 [MB] (14 MBps) [2024-11-20T09:25:16.895Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 09:25:16.627725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.775 [2024-11-20 09:25:16.628113] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:21.775 [2024-11-20 09:25:16.628245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:21.775 [2024-11-20 09:25:16.628407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.775 [2024-11-20 09:25:16.632185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:21.775 [2024-11-20 09:25:16.636955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.775 [2024-11-20 09:25:16.636998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:21.775 [2024-11-20 09:25:16.637017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.565 ms 00:28:21.775 [2024-11-20 09:25:16.637030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.775 [2024-11-20 09:25:16.649709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.775 [2024-11-20 09:25:16.649760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:21.775 [2024-11-20 09:25:16.649798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.515 ms 00:28:21.775 [2024-11-20 09:25:16.649809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.775 [2024-11-20 09:25:16.673590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.775 [2024-11-20 09:25:16.673637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:21.775 [2024-11-20 09:25:16.673672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.747 ms 00:28:21.775 [2024-11-20 09:25:16.673686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.775 [2024-11-20 09:25:16.680374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.775 [2024-11-20 09:25:16.680405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:21.775 [2024-11-20 09:25:16.680435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.648 ms 00:28:21.776 [2024-11-20 09:25:16.680446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.776 [2024-11-20 09:25:16.712638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.776 [2024-11-20 09:25:16.712727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:21.776 [2024-11-20 09:25:16.712746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.105 ms 00:28:21.776 [2024-11-20 09:25:16.712759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.776 [2024-11-20 09:25:16.730182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.776 [2024-11-20 09:25:16.730260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:21.776 [2024-11-20 09:25:16.730280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.379 ms 00:28:21.776 [2024-11-20 09:25:16.730293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.776 [2024-11-20 09:25:16.836744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.776 [2024-11-20 09:25:16.836849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:21.776 [2024-11-20 09:25:16.836886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.400 ms 00:28:21.776 [2024-11-20 09:25:16.836899] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:21.776 [2024-11-20 09:25:16.867909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.776 [2024-11-20 09:25:16.867963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:21.776 [2024-11-20 09:25:16.867997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.972 ms 00:28:21.776 [2024-11-20 09:25:16.868008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.035 [2024-11-20 09:25:16.896686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.035 [2024-11-20 09:25:16.896753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:22.035 [2024-11-20 09:25:16.896787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.638 ms 00:28:22.035 [2024-11-20 09:25:16.896799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.035 [2024-11-20 09:25:16.928284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.035 [2024-11-20 09:25:16.928609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:22.035 [2024-11-20 09:25:16.928641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.439 ms 00:28:22.035 [2024-11-20 09:25:16.928675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.035 [2024-11-20 09:25:16.958740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.035 [2024-11-20 09:25:16.958790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:22.035 [2024-11-20 09:25:16.958810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.939 ms 00:28:22.035 [2024-11-20 09:25:16.958822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.035 [2024-11-20 09:25:16.958867] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:22.035 [2024-11-20 09:25:16.958893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118272 / 261120 wr_cnt: 1 state: open 00:28:22.035 [2024-11-20 09:25:16.958910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.958923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.958935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.958948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.958962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.958974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.958987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 
09:25:16.959053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:28:22.035 [2024-11-20 09:25:16.959369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:22.035 [2024-11-20 09:25:16.959465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.959994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:22.036 [2024-11-20 09:25:16.960228] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:22.036 [2024-11-20 09:25:16.960241] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 26796cd3-a23c-498c-9a73-5c5d333b72c6 00:28:22.036 [2024-11-20 09:25:16.960253] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118272 00:28:22.036 [2024-11-20 09:25:16.960266] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119232 00:28:22.036 [2024-11-20 09:25:16.960277] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118272 00:28:22.036 [2024-11-20 09:25:16.960299] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:28:22.036 [2024-11-20 09:25:16.960311] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:22.036 [2024-11-20 09:25:16.960330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:22.036 [2024-11-20 09:25:16.960354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:22.036 [2024-11-20 09:25:16.960375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:22.036 [2024-11-20 09:25:16.960385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:22.036 [2024-11-20 09:25:16.960397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.036 [2024-11-20 09:25:16.960416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:28:22.036 [2024-11-20 09:25:16.960429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:28:22.036 [2024-11-20 09:25:16.960441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.036 [2024-11-20 09:25:16.976834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.036 [2024-11-20 09:25:16.976871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:22.036 [2024-11-20 09:25:16.976888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.349 ms 00:28:22.036 [2024-11-20 09:25:16.976908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.036 [2024-11-20 09:25:16.977358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.036 [2024-11-20 09:25:16.977388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:22.036 [2024-11-20 09:25:16.977403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:28:22.036 [2024-11-20 09:25:16.977414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.036 [2024-11-20 09:25:17.020237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.036 [2024-11-20 09:25:17.020297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:22.036 [2024-11-20 09:25:17.020321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.037 [2024-11-20 09:25:17.020334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.037 [2024-11-20 09:25:17.020412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.037 [2024-11-20 09:25:17.020429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:22.037 [2024-11-20 09:25:17.020442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.037 [2024-11-20 09:25:17.020454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.037 [2024-11-20 09:25:17.020544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.037 [2024-11-20 09:25:17.020564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:22.037 [2024-11-20 09:25:17.020577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.037 [2024-11-20 09:25:17.020597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.037 [2024-11-20 09:25:17.020621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.037 [2024-11-20 09:25:17.020636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:22.037 [2024-11-20 09:25:17.020672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.037 [2024-11-20 09:25:17.020686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.037 [2024-11-20 09:25:17.131912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.037 [2024-11-20 09:25:17.131990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:22.037 [2024-11-20 09:25:17.132034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.037 [2024-11-20 09:25:17.132046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.295 [2024-11-20 09:25:17.219863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.295 [2024-11-20 09:25:17.219959] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.295 [2024-11-20 09:25:17.219995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.295 [2024-11-20 09:25:17.220008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.295 [2024-11-20 09:25:17.220125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.295 [2024-11-20 09:25:17.220143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:22.295 [2024-11-20 09:25:17.220156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.295 [2024-11-20 09:25:17.220168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.295 [2024-11-20 09:25:17.220221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.295 [2024-11-20 09:25:17.220236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:22.295 [2024-11-20 09:25:17.220249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.295 [2024-11-20 09:25:17.220259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.295 [2024-11-20 09:25:17.220377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.295 [2024-11-20 09:25:17.220396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:22.295 [2024-11-20 09:25:17.220408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.295 [2024-11-20 09:25:17.220420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.295 [2024-11-20 09:25:17.220476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.295 [2024-11-20 09:25:17.220510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:22.295 [2024-11-20 09:25:17.220522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.295 [2024-11-20 09:25:17.220533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.295 [2024-11-20 09:25:17.220593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.295 [2024-11-20 09:25:17.220645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:22.295 [2024-11-20 09:25:17.220658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.295 [2024-11-20 09:25:17.220670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.295 [2024-11-20 09:25:17.220756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.295 [2024-11-20 09:25:17.220777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:22.295 [2024-11-20 09:25:17.220791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.295 [2024-11-20 09:25:17.220803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.295 [2024-11-20 09:25:17.220976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 595.988 ms, result 0 00:28:23.666 00:28:23.666 00:28:23.666 09:25:18 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:23.666 [2024-11-20 09:25:18.781194] Starting SPDK v25.01-pre 
git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:28:23.666 [2024-11-20 09:25:18.781350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80699 ] 00:28:23.924 [2024-11-20 09:25:18.956827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.181 [2024-11-20 09:25:19.088908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.439 [2024-11-20 09:25:19.441151] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:24.439 [2024-11-20 09:25:19.441233] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:24.697 [2024-11-20 09:25:19.603997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.604272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:24.697 [2024-11-20 09:25:19.604315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:24.697 [2024-11-20 09:25:19.604329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.604401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.604419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:24.697 [2024-11-20 09:25:19.604437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:28:24.697 [2024-11-20 09:25:19.604450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.604481] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:24.697 [2024-11-20 09:25:19.605421] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:24.697 [2024-11-20 09:25:19.605466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.605482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:24.697 [2024-11-20 09:25:19.605495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:28:24.697 [2024-11-20 09:25:19.605507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.607462] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:24.697 [2024-11-20 09:25:19.624299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.624338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:24.697 [2024-11-20 09:25:19.624372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.838 ms 00:28:24.697 [2024-11-20 09:25:19.624384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.624459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.624479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:24.697 [2024-11-20 09:25:19.624492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:24.697 [2024-11-20 09:25:19.624503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.633197] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.633437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:24.697 [2024-11-20 09:25:19.633464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.600 ms 00:28:24.697 [2024-11-20 09:25:19.633478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.633589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.633609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:24.697 [2024-11-20 09:25:19.633622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:24.697 [2024-11-20 09:25:19.633634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.633717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.633737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:24.697 [2024-11-20 09:25:19.633750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:24.697 [2024-11-20 09:25:19.633762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.633798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:24.697 [2024-11-20 09:25:19.638865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.638917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:24.697 [2024-11-20 09:25:19.638933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.076 ms 00:28:24.697 [2024-11-20 09:25:19.638950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.638989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.697 [2024-11-20 09:25:19.639004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:24.697 [2024-11-20 09:25:19.639016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:24.697 [2024-11-20 09:25:19.639027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.697 [2024-11-20 09:25:19.639106] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:24.697 [2024-11-20 09:25:19.639161] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:24.697 [2024-11-20 09:25:19.639209] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:24.698 [2024-11-20 09:25:19.639235] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:24.698 [2024-11-20 09:25:19.639344] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:24.698 [2024-11-20 09:25:19.639360] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:24.698 [2024-11-20 09:25:19.639375] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:24.698 [2024-11-20 09:25:19.639390] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 
00:28:24.698 [2024-11-20 09:25:19.639403] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:24.698 [2024-11-20 09:25:19.639416] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:24.698 [2024-11-20 09:25:19.639428] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:24.698 [2024-11-20 09:25:19.639439] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:24.698 [2024-11-20 09:25:19.639450] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:24.698 [2024-11-20 09:25:19.639469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.698 [2024-11-20 09:25:19.639481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:24.698 [2024-11-20 09:25:19.639494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:28:24.698 [2024-11-20 09:25:19.639505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.698 [2024-11-20 09:25:19.639603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.698 [2024-11-20 09:25:19.639618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:24.698 [2024-11-20 09:25:19.639631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:24.698 [2024-11-20 09:25:19.639642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.698 [2024-11-20 09:25:19.639811] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:24.698 [2024-11-20 09:25:19.639837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:24.698 [2024-11-20 09:25:19.639850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:24.698 [2024-11-20 09:25:19.639862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.698 [2024-11-20 09:25:19.639874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:24.698 [2024-11-20 09:25:19.639885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:24.698 [2024-11-20 09:25:19.639896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:24.698 [2024-11-20 09:25:19.639909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:24.698 [2024-11-20 09:25:19.639920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:24.698 [2024-11-20 09:25:19.639931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:24.698 [2024-11-20 09:25:19.639942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:24.698 [2024-11-20 09:25:19.639953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:24.698 [2024-11-20 09:25:19.639964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:24.698 [2024-11-20 09:25:19.639974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:24.698 [2024-11-20 09:25:19.639985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:24.698 [2024-11-20 09:25:19.640007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:24.698 [2024-11-20 09:25:19.640047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:24.698 
[2024-11-20 09:25:19.640058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:24.698 [2024-11-20 09:25:19.640079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.698 [2024-11-20 09:25:19.640101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:24.698 [2024-11-20 09:25:19.640112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.698 [2024-11-20 09:25:19.640133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:24.698 [2024-11-20 09:25:19.640143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.698 [2024-11-20 09:25:19.640180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:24.698 [2024-11-20 09:25:19.640191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.698 [2024-11-20 09:25:19.640211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:24.698 [2024-11-20 09:25:19.640222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:24.698 [2024-11-20 09:25:19.640243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:24.698 [2024-11-20 09:25:19.640253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:24.698 [2024-11-20 09:25:19.640263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:24.698 [2024-11-20 09:25:19.640274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:24.698 [2024-11-20 09:25:19.640285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:24.698 [2024-11-20 09:25:19.640295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:24.698 [2024-11-20 09:25:19.640316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:24.698 [2024-11-20 09:25:19.640327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640338] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:24.698 [2024-11-20 09:25:19.640350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:24.698 [2024-11-20 09:25:19.640361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:24.698 [2024-11-20 09:25:19.640372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.698 [2024-11-20 09:25:19.640383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:24.698 [2024-11-20 09:25:19.640394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:24.698 [2024-11-20 09:25:19.640406] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 3.38 MiB 00:28:24.698 [2024-11-20 09:25:19.640417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:24.698 [2024-11-20 09:25:19.640427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:24.698 [2024-11-20 09:25:19.640438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:24.698 [2024-11-20 09:25:19.640450] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:24.698 [2024-11-20 09:25:19.640464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:24.698 [2024-11-20 09:25:19.640476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:24.698 [2024-11-20 09:25:19.640488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:24.698 [2024-11-20 09:25:19.640499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:24.698 [2024-11-20 09:25:19.640511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:24.698 [2024-11-20 09:25:19.640522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:24.698 [2024-11-20 09:25:19.640533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:24.699 [2024-11-20 09:25:19.640560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:24.699 [2024-11-20 09:25:19.640571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:24.699 [2024-11-20 09:25:19.640582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:24.699 [2024-11-20 09:25:19.640593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:24.699 [2024-11-20 09:25:19.640604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:24.699 [2024-11-20 09:25:19.640616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:24.699 [2024-11-20 09:25:19.640628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:24.699 [2024-11-20 09:25:19.640640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:24.699 [2024-11-20 09:25:19.640651] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:24.699 [2024-11-20 09:25:19.640669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:24.699 [2024-11-20 09:25:19.640682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:24.699 [2024-11-20 09:25:19.640694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:24.699 [2024-11-20 09:25:19.641043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:24.699 [2024-11-20 09:25:19.641125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:24.699 [2024-11-20 09:25:19.641184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.641293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:24.699 [2024-11-20 09:25:19.641351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:28:24.699 [2024-11-20 09:25:19.641388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.699 [2024-11-20 09:25:19.680999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.681282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.699 [2024-11-20 09:25:19.681406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.428 ms 00:28:24.699 [2024-11-20 09:25:19.681456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.699 [2024-11-20 09:25:19.681699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.681840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:24.699 [2024-11-20 09:25:19.681963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:28:24.699 [2024-11-20 09:25:19.682102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.699 [2024-11-20 09:25:19.736170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.736412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.699 [2024-11-20 09:25:19.736531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.907 ms 00:28:24.699 [2024-11-20 09:25:19.736582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.699 [2024-11-20 09:25:19.736777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.736833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.699 [2024-11-20 09:25:19.737050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:24.699 [2024-11-20 09:25:19.737080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.699 [2024-11-20 09:25:19.737887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.738002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.699 [2024-11-20 09:25:19.738114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:28:24.699 [2024-11-20 09:25:19.738161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.699 [2024-11-20 09:25:19.738429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.738559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:28:24.699 [2024-11-20 09:25:19.738700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:28:24.699 [2024-11-20 09:25:19.738820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.699 [2024-11-20 09:25:19.758188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.758270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.699 [2024-11-20 09:25:19.758294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.322 ms 00:28:24.699 [2024-11-20 09:25:19.758307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.699 [2024-11-20 09:25:19.775287] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:24.699 [2024-11-20 09:25:19.775340] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:24.699 [2024-11-20 09:25:19.775376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.775390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:24.699 [2024-11-20 09:25:19.775404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.930 ms 00:28:24.699 [2024-11-20 09:25:19.775417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.699 [2024-11-20 09:25:19.805441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.699 [2024-11-20 09:25:19.805505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:24.699 [2024-11-20 09:25:19.805538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.977 ms 00:28:24.699 [2024-11-20 09:25:19.805550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.957 [2024-11-20 09:25:19.820660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.957 [2024-11-20 09:25:19.820873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:24.957 [2024-11-20 09:25:19.820900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.063 ms 00:28:24.957 [2024-11-20 09:25:19.820914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.957 [2024-11-20 09:25:19.835609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.957 [2024-11-20 09:25:19.835682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:24.957 [2024-11-20 09:25:19.835701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.622 ms 00:28:24.957 [2024-11-20 09:25:19.835713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.957 [2024-11-20 09:25:19.836545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.957 [2024-11-20 09:25:19.836611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:24.957 [2024-11-20 09:25:19.836642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:28:24.957 [2024-11-20 09:25:19.836659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.957 [2024-11-20 09:25:19.912757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.957 [2024-11-20 09:25:19.912828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:24.957 [2024-11-20 09:25:19.912873] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.040 ms 00:28:24.957 [2024-11-20 09:25:19.912886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.957 [2024-11-20 09:25:19.924584] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:24.957 [2024-11-20 09:25:19.927787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.957 [2024-11-20 09:25:19.927823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:24.957 [2024-11-20 09:25:19.927857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.832 ms 00:28:24.957 [2024-11-20 09:25:19.927870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.957 [2024-11-20 09:25:19.928004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.957 [2024-11-20 09:25:19.928023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:24.957 [2024-11-20 09:25:19.928038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:24.957 [2024-11-20 09:25:19.928053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.957 [2024-11-20 09:25:19.930056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.957 [2024-11-20 09:25:19.930093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:24.957 [2024-11-20 09:25:19.930125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.949 ms 00:28:24.957 [2024-11-20 09:25:19.930136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.958 [2024-11-20 09:25:19.930172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.958 [2024-11-20 09:25:19.930187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:24.958 [2024-11-20 09:25:19.930200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:24.958 [2024-11-20 09:25:19.930249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.958 [2024-11-20 09:25:19.930297] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:24.958 [2024-11-20 09:25:19.930318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.958 [2024-11-20 09:25:19.930348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:24.958 [2024-11-20 09:25:19.930377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:28:24.958 [2024-11-20 09:25:19.930389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.958 [2024-11-20 09:25:19.960849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.958 [2024-11-20 09:25:19.961069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:24.958 [2024-11-20 09:25:19.961097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.424 ms 00:28:24.958 [2024-11-20 09:25:19.961128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.958 [2024-11-20 09:25:19.961219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.958 [2024-11-20 09:25:19.961237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:24.958 [2024-11-20 09:25:19.961251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:24.958 [2024-11-20 09:25:19.961262] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:24.958 [2024-11-20 09:25:19.964332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 358.923 ms, result 0 00:28:26.332  [2024-11-20T09:25:22.393Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-20T09:25:23.329Z] Copying: 48/1024 [MB] (24 MBps) [2024-11-20T09:25:24.263Z] Copying: 74/1024 [MB] (26 MBps) [2024-11-20T09:25:25.197Z] Copying: 99/1024 [MB] (25 MBps) [2024-11-20T09:25:26.581Z] Copying: 124/1024 [MB] (25 MBps) [2024-11-20T09:25:27.518Z] Copying: 150/1024 [MB] (26 MBps) [2024-11-20T09:25:28.453Z] Copying: 176/1024 [MB] (25 MBps) [2024-11-20T09:25:29.389Z] Copying: 201/1024 [MB] (25 MBps) [2024-11-20T09:25:30.323Z] Copying: 227/1024 [MB] (25 MBps) [2024-11-20T09:25:31.263Z] Copying: 253/1024 [MB] (25 MBps) [2024-11-20T09:25:32.199Z] Copying: 277/1024 [MB] (24 MBps) [2024-11-20T09:25:33.574Z] Copying: 304/1024 [MB] (26 MBps) [2024-11-20T09:25:34.510Z] Copying: 328/1024 [MB] (24 MBps) [2024-11-20T09:25:35.447Z] Copying: 353/1024 [MB] (24 MBps) [2024-11-20T09:25:36.412Z] Copying: 376/1024 [MB] (23 MBps) [2024-11-20T09:25:37.347Z] Copying: 401/1024 [MB] (24 MBps) [2024-11-20T09:25:38.282Z] Copying: 427/1024 [MB] (25 MBps) [2024-11-20T09:25:39.217Z] Copying: 451/1024 [MB] (24 MBps) [2024-11-20T09:25:40.591Z] Copying: 477/1024 [MB] (25 MBps) [2024-11-20T09:25:41.528Z] Copying: 503/1024 [MB] (25 MBps) [2024-11-20T09:25:42.462Z] Copying: 527/1024 [MB] (24 MBps) [2024-11-20T09:25:43.398Z] Copying: 551/1024 [MB] (24 MBps) [2024-11-20T09:25:44.333Z] Copying: 573/1024 [MB] (21 MBps) [2024-11-20T09:25:45.267Z] Copying: 596/1024 [MB] (23 MBps) [2024-11-20T09:25:46.201Z] Copying: 621/1024 [MB] (24 MBps) [2024-11-20T09:25:47.201Z] Copying: 645/1024 [MB] (24 MBps) [2024-11-20T09:25:48.576Z] Copying: 671/1024 [MB] (26 MBps) [2024-11-20T09:25:49.511Z] Copying: 696/1024 [MB] (25 MBps) [2024-11-20T09:25:50.446Z] Copying: 723/1024 [MB] (27 MBps) [2024-11-20T09:25:51.382Z] Copying: 749/1024 [MB] (25 MBps) [2024-11-20T09:25:52.317Z] Copying: 775/1024 [MB] (26 MBps) [2024-11-20T09:25:53.252Z] Copying: 801/1024 [MB] (25 MBps) [2024-11-20T09:25:54.190Z] Copying: 829/1024 [MB] (27 MBps) [2024-11-20T09:25:55.564Z] Copying: 855/1024 [MB] (26 MBps) [2024-11-20T09:25:56.499Z] Copying: 882/1024 [MB] (26 MBps) [2024-11-20T09:25:57.431Z] Copying: 909/1024 [MB] (27 MBps) [2024-11-20T09:25:58.365Z] Copying: 936/1024 [MB] (26 MBps) [2024-11-20T09:25:59.303Z] Copying: 962/1024 [MB] (25 MBps) [2024-11-20T09:26:00.238Z] Copying: 987/1024 [MB] (25 MBps) [2024-11-20T09:26:00.804Z] Copying: 1013/1024 [MB] (26 MBps) [2024-11-20T09:26:00.804Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-20 09:26:00.593780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.684 [2024-11-20 09:26:00.593885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:05.684 [2024-11-20 09:26:00.593908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:05.684 [2024-11-20 09:26:00.593922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.684 [2024-11-20 09:26:00.593973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:05.684 [2024-11-20 09:26:00.598072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.684 [2024-11-20 09:26:00.598101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:05.684 [2024-11-20 09:26:00.598116] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.074 ms 00:29:05.684 [2024-11-20 09:26:00.598128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.684 [2024-11-20 09:26:00.598426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.684 [2024-11-20 09:26:00.598448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:05.684 [2024-11-20 09:26:00.598463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:29:05.684 [2024-11-20 09:26:00.598475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.684 [2024-11-20 09:26:00.603441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.684 [2024-11-20 09:26:00.603621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:05.684 [2024-11-20 09:26:00.603664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.936 ms 00:29:05.684 [2024-11-20 09:26:00.603680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.684 [2024-11-20 09:26:00.610957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.684 [2024-11-20 09:26:00.610989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:05.684 [2024-11-20 09:26:00.611004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.211 ms 00:29:05.684 [2024-11-20 09:26:00.611017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.685 [2024-11-20 09:26:00.643110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.685 [2024-11-20 09:26:00.643320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:05.685 [2024-11-20 09:26:00.643442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.998 ms 00:29:05.685 [2024-11-20 09:26:00.643494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.685 [2024-11-20 09:26:00.661268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.685 [2024-11-20 09:26:00.661462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:05.685 [2024-11-20 09:26:00.661590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.697 ms 00:29:05.685 [2024-11-20 09:26:00.661771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.685 [2024-11-20 09:26:00.771916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.685 [2024-11-20 09:26:00.772167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:05.685 [2024-11-20 09:26:00.772303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.059 ms 00:29:05.685 [2024-11-20 09:26:00.772356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.944 [2024-11-20 09:26:00.804712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.944 [2024-11-20 09:26:00.804947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:05.944 [2024-11-20 09:26:00.805093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.231 ms 00:29:05.944 [2024-11-20 09:26:00.805148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.944 [2024-11-20 09:26:00.835420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.944 [2024-11-20 09:26:00.835462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist trim metadata 00:29:05.944 [2024-11-20 09:26:00.835511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.113 ms 00:29:05.944 [2024-11-20 09:26:00.835524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.944 [2024-11-20 09:26:00.865889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.944 [2024-11-20 09:26:00.865948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:05.944 [2024-11-20 09:26:00.865982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.319 ms 00:29:05.944 [2024-11-20 09:26:00.865995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.944 [2024-11-20 09:26:00.896731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.944 [2024-11-20 09:26:00.896788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:05.944 [2024-11-20 09:26:00.896823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.584 ms 00:29:05.944 [2024-11-20 09:26:00.896835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.944 [2024-11-20 09:26:00.896882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:05.944 [2024-11-20 09:26:00.896906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:29:05.944 [2024-11-20 09:26:00.896922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:05.944 [2024-11-20 09:26:00.896934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:05.944 [2024-11-20 09:26:00.896948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:05.944 [2024-11-20 09:26:00.896960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:05.944 [2024-11-20 09:26:00.896972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:05.944 [2024-11-20 09:26:00.896985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:05.944 [2024-11-20 09:26:00.896998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:05.944 [2024-11-20 09:26:00.897010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:05.944 [2024-11-20 09:26:00.897022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:29:05.945 [2024-11-20 09:26:00.897118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.897990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898088] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:05.945 [2024-11-20 09:26:00.898298] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:05.945 [2024-11-20 09:26:00.898320] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 26796cd3-a23c-498c-9a73-5c5d333b72c6 00:29:05.945 [2024-11-20 09:26:00.898334] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:29:05.945 [2024-11-20 09:26:00.898346] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13760 00:29:05.945 [2024-11-20 09:26:00.898358] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12800 00:29:05.945 [2024-11-20 09:26:00.898371] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0750 00:29:05.945 [2024-11-20 09:26:00.898383] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:05.945 [2024-11-20 09:26:00.898404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:05.945 [2024-11-20 09:26:00.898424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:05.945 [2024-11-20 09:26:00.898459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:05.945 [2024-11-20 09:26:00.898479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:05.945 [2024-11-20 09:26:00.898492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.945 [2024-11-20 09:26:00.898504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:05.945 [2024-11-20 09:26:00.898517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:29:05.945 [2024-11-20 09:26:00.898529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.945 [2024-11-20 09:26:00.916191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.945 [2024-11-20 09:26:00.916238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:05.945 [2024-11-20 09:26:00.916273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.612 ms 00:29:05.945 [2024-11-20 09:26:00.916295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.945 [2024-11-20 
09:26:00.916830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:05.945 [2024-11-20 09:26:00.916861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:05.945 [2024-11-20 09:26:00.916877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:29:05.945 [2024-11-20 09:26:00.916890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.945 [2024-11-20 09:26:00.962107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.945 [2024-11-20 09:26:00.962170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:05.945 [2024-11-20 09:26:00.962220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.945 [2024-11-20 09:26:00.962251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.945 [2024-11-20 09:26:00.962340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.945 [2024-11-20 09:26:00.962357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:05.945 [2024-11-20 09:26:00.962370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.945 [2024-11-20 09:26:00.962382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.945 [2024-11-20 09:26:00.962477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.945 [2024-11-20 09:26:00.962498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:05.945 [2024-11-20 09:26:00.962511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.945 [2024-11-20 09:26:00.962531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:05.945 [2024-11-20 09:26:00.962571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:05.945 [2024-11-20 09:26:00.962596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:05.945 [2024-11-20 09:26:00.962608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:05.945 [2024-11-20 09:26:00.962620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.204 [2024-11-20 09:26:01.074562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.204 [2024-11-20 09:26:01.074699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:06.204 [2024-11-20 09:26:01.074729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.204 [2024-11-20 09:26:01.074742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.204 [2024-11-20 09:26:01.161624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.204 [2024-11-20 09:26:01.161728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:06.204 [2024-11-20 09:26:01.161750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.204 [2024-11-20 09:26:01.161763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.204 [2024-11-20 09:26:01.161875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.204 [2024-11-20 09:26:01.161893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:06.204 [2024-11-20 09:26:01.161907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.204 [2024-11-20 09:26:01.161919] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.204 [2024-11-20 09:26:01.161976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.204 [2024-11-20 09:26:01.161991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:06.204 [2024-11-20 09:26:01.162004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.204 [2024-11-20 09:26:01.162017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.204 [2024-11-20 09:26:01.162144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.204 [2024-11-20 09:26:01.162164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:06.204 [2024-11-20 09:26:01.162177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.204 [2024-11-20 09:26:01.162189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.204 [2024-11-20 09:26:01.162256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.204 [2024-11-20 09:26:01.162277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:06.204 [2024-11-20 09:26:01.162291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.204 [2024-11-20 09:26:01.162302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.204 [2024-11-20 09:26:01.162358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.204 [2024-11-20 09:26:01.162381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:06.204 [2024-11-20 09:26:01.162394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.204 [2024-11-20 09:26:01.162406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.204 [2024-11-20 09:26:01.162470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:06.204 [2024-11-20 09:26:01.162488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:06.204 [2024-11-20 09:26:01.162502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:06.204 [2024-11-20 09:26:01.162514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.204 [2024-11-20 09:26:01.162693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 568.853 ms, result 0 00:29:07.140 00:29:07.140 00:29:07.140 09:26:02 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:09.671 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79126 00:29:09.671 09:26:04 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79126 ']' 00:29:09.671 09:26:04 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79126 00:29:09.671 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79126) - No such process 00:29:09.671 Process with pid 79126 is not found 00:29:09.671 09:26:04 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79126 is not found' 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:29:09.671 Remove shared memory files 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:09.671 09:26:04 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:29:09.671 ************************************ 00:29:09.671 END TEST ftl_restore 00:29:09.671 ************************************ 00:29:09.671 00:29:09.671 real 3m22.569s 00:29:09.671 user 3m7.848s 00:29:09.671 sys 0m17.839s 00:29:09.671 09:26:04 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.671 09:26:04 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:29:09.671 09:26:04 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:09.671 09:26:04 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:09.671 09:26:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:09.671 09:26:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:09.671 ************************************ 00:29:09.671 START TEST ftl_dirty_shutdown 00:29:09.671 ************************************ 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:29:09.671 * Looking for test storage... 
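The `kill -0` exchange a few records above is easy to misread: `kill -0 <pid>` delivers no signal at all, it only asks the kernel whether the process exists and can be signaled, so the "No such process" line is the expected outcome when the SPDK app has already exited on its own, not a failure. A minimal sketch of that probe pattern follows; the function name and messages are illustrative stand-ins, not the harness's exact killprocess code.

    #!/usr/bin/env bash
    # Probe a pid without delivering a signal: `kill -0` only checks
    # that the process exists and is signalable.
    probe_and_stop() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            # Still alive: ask it to shut down.
            kill "$pid"
        else
            # Mirrors the log above: a missing pid is reported, not fatal.
            echo "Process with pid $pid is not found"
        fi
    }
    probe_and_stop 79126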
00:29:09.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:09.671 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:09.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.930 --rc genhtml_branch_coverage=1 00:29:09.930 --rc genhtml_function_coverage=1 00:29:09.930 --rc genhtml_legend=1 00:29:09.930 --rc geninfo_all_blocks=1 00:29:09.930 --rc geninfo_unexecuted_blocks=1 00:29:09.930 00:29:09.930 ' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:09.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.930 --rc genhtml_branch_coverage=1 00:29:09.930 --rc genhtml_function_coverage=1 00:29:09.930 --rc genhtml_legend=1 00:29:09.930 --rc geninfo_all_blocks=1 00:29:09.930 --rc geninfo_unexecuted_blocks=1 00:29:09.930 00:29:09.930 ' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:09.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.930 --rc genhtml_branch_coverage=1 00:29:09.930 --rc genhtml_function_coverage=1 00:29:09.930 --rc genhtml_legend=1 00:29:09.930 --rc geninfo_all_blocks=1 00:29:09.930 --rc geninfo_unexecuted_blocks=1 00:29:09.930 00:29:09.930 ' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:09.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.930 --rc genhtml_branch_coverage=1 00:29:09.930 --rc genhtml_function_coverage=1 00:29:09.930 --rc genhtml_legend=1 00:29:09.930 --rc geninfo_all_blocks=1 00:29:09.930 --rc geninfo_unexecuted_blocks=1 00:29:09.930 00:29:09.930 ' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:09.930 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:29:09.931 09:26:04 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81222 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81222 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81222 ']' 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.931 09:26:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:09.931 [2024-11-20 09:26:04.933035] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
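The launch sequence above (`svcpid=81222` followed by `waitforlisten 81222`) pins spdk_tgt to core 0 with `-m 0x1`, then blocks until the target's RPC socket answers before any bdev RPCs are issued. A rough sketch of that start-and-wait pattern, assuming rpc.py's default /var/tmp/spdk.sock socket and using the real `spdk_get_version` RPC as a cheap readiness probe; the polling loop is illustrative, not the harness's exact waitforlisten implementation.

    #!/usr/bin/env bash
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_TGT" -m 0x1 &   # -m 0x1: run reactors on core 0 only
    svcpid=$!

    # Poll until the target listens on /var/tmp/spdk.sock (rpc.py's default);
    # spdk_get_version succeeds as soon as the RPC server is up.
    for _ in $(seq 1 100); do
        "$RPC" spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done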
00:29:09.931 [2024-11-20 09:26:04.933541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81222 ] 00:29:10.190 [2024-11-20 09:26:05.119121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.190 [2024-11-20 09:26:05.276521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.127 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.127 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:11.127 09:26:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:11.127 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:29:11.127 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:11.127 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:29:11.127 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:11.127 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:11.694 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:11.695 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:11.695 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:11.695 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:11.695 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:11.695 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:11.695 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:11.695 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:11.953 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:11.953 { 00:29:11.953 "name": "nvme0n1", 00:29:11.953 "aliases": [ 00:29:11.953 "a363c67d-1f30-426f-b3e1-2c484da49171" 00:29:11.953 ], 00:29:11.953 "product_name": "NVMe disk", 00:29:11.953 "block_size": 4096, 00:29:11.953 "num_blocks": 1310720, 00:29:11.953 "uuid": "a363c67d-1f30-426f-b3e1-2c484da49171", 00:29:11.953 "numa_id": -1, 00:29:11.953 "assigned_rate_limits": { 00:29:11.953 "rw_ios_per_sec": 0, 00:29:11.953 "rw_mbytes_per_sec": 0, 00:29:11.953 "r_mbytes_per_sec": 0, 00:29:11.953 "w_mbytes_per_sec": 0 00:29:11.953 }, 00:29:11.953 "claimed": true, 00:29:11.953 "claim_type": "read_many_write_one", 00:29:11.953 "zoned": false, 00:29:11.953 "supported_io_types": { 00:29:11.953 "read": true, 00:29:11.953 "write": true, 00:29:11.953 "unmap": true, 00:29:11.953 "flush": true, 00:29:11.953 "reset": true, 00:29:11.953 "nvme_admin": true, 00:29:11.953 "nvme_io": true, 00:29:11.953 "nvme_io_md": false, 00:29:11.953 "write_zeroes": true, 00:29:11.953 "zcopy": false, 00:29:11.953 "get_zone_info": false, 00:29:11.953 "zone_management": false, 00:29:11.953 "zone_append": false, 00:29:11.953 "compare": true, 00:29:11.953 "compare_and_write": false, 00:29:11.953 "abort": true, 00:29:11.953 "seek_hole": false, 00:29:11.953 "seek_data": false, 00:29:11.953 
"copy": true, 00:29:11.953 "nvme_iov_md": false 00:29:11.953 }, 00:29:11.953 "driver_specific": { 00:29:11.953 "nvme": [ 00:29:11.953 { 00:29:11.953 "pci_address": "0000:00:11.0", 00:29:11.953 "trid": { 00:29:11.953 "trtype": "PCIe", 00:29:11.953 "traddr": "0000:00:11.0" 00:29:11.953 }, 00:29:11.953 "ctrlr_data": { 00:29:11.953 "cntlid": 0, 00:29:11.953 "vendor_id": "0x1b36", 00:29:11.953 "model_number": "QEMU NVMe Ctrl", 00:29:11.953 "serial_number": "12341", 00:29:11.953 "firmware_revision": "8.0.0", 00:29:11.953 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:11.953 "oacs": { 00:29:11.953 "security": 0, 00:29:11.953 "format": 1, 00:29:11.953 "firmware": 0, 00:29:11.953 "ns_manage": 1 00:29:11.953 }, 00:29:11.953 "multi_ctrlr": false, 00:29:11.953 "ana_reporting": false 00:29:11.953 }, 00:29:11.953 "vs": { 00:29:11.953 "nvme_version": "1.4" 00:29:11.953 }, 00:29:11.953 "ns_data": { 00:29:11.953 "id": 1, 00:29:11.953 "can_share": false 00:29:11.953 } 00:29:11.953 } 00:29:11.953 ], 00:29:11.953 "mp_policy": "active_passive" 00:29:11.953 } 00:29:11.953 } 00:29:11.953 ]' 00:29:11.953 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:11.953 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:11.953 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:11.953 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:11.953 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:11.953 09:26:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:11.953 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:11.953 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:11.954 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:11.954 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:11.954 09:26:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:12.212 09:26:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=1ac9cc12-e0de-4c39-a0a4-9099a9b58fb7 00:29:12.212 09:26:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:12.212 09:26:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ac9cc12-e0de-4c39-a0a4-9099a9b58fb7 00:29:12.778 09:26:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:12.778 09:26:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=8416b239-9e3d-4710-acc7-263f107fe342 00:29:12.778 09:26:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8416b239-9e3d-4710-acc7-263f107fe342 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:13.345 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:13.603 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:13.603 { 00:29:13.604 "name": "d66fad95-3260-4cc6-93a6-775fd4650dc6", 00:29:13.604 "aliases": [ 00:29:13.604 "lvs/nvme0n1p0" 00:29:13.604 ], 00:29:13.604 "product_name": "Logical Volume", 00:29:13.604 "block_size": 4096, 00:29:13.604 "num_blocks": 26476544, 00:29:13.604 "uuid": "d66fad95-3260-4cc6-93a6-775fd4650dc6", 00:29:13.604 "assigned_rate_limits": { 00:29:13.604 "rw_ios_per_sec": 0, 00:29:13.604 "rw_mbytes_per_sec": 0, 00:29:13.604 "r_mbytes_per_sec": 0, 00:29:13.604 "w_mbytes_per_sec": 0 00:29:13.604 }, 00:29:13.604 "claimed": false, 00:29:13.604 "zoned": false, 00:29:13.604 "supported_io_types": { 00:29:13.604 "read": true, 00:29:13.604 "write": true, 00:29:13.604 "unmap": true, 00:29:13.604 "flush": false, 00:29:13.604 "reset": true, 00:29:13.604 "nvme_admin": false, 00:29:13.604 "nvme_io": false, 00:29:13.604 "nvme_io_md": false, 00:29:13.604 "write_zeroes": true, 00:29:13.604 "zcopy": false, 00:29:13.604 "get_zone_info": false, 00:29:13.604 "zone_management": false, 00:29:13.604 "zone_append": false, 00:29:13.604 "compare": false, 00:29:13.604 "compare_and_write": false, 00:29:13.604 "abort": false, 00:29:13.604 "seek_hole": true, 00:29:13.604 "seek_data": true, 00:29:13.604 "copy": false, 00:29:13.604 "nvme_iov_md": false 00:29:13.604 }, 00:29:13.604 "driver_specific": { 00:29:13.604 "lvol": { 00:29:13.604 "lvol_store_uuid": "8416b239-9e3d-4710-acc7-263f107fe342", 00:29:13.604 "base_bdev": "nvme0n1", 00:29:13.604 "thin_provision": true, 00:29:13.604 "num_allocated_clusters": 0, 00:29:13.604 "snapshot": false, 00:29:13.604 "clone": false, 00:29:13.604 "esnap_clone": false 00:29:13.604 } 00:29:13.604 } 00:29:13.604 } 00:29:13.604 ]' 00:29:13.604 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:13.604 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:13.604 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:13.604 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:13.604 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:13.604 09:26:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:13.604 09:26:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:29:13.604 09:26:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:13.604 09:26:08 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:14.173 09:26:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:14.173 09:26:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:14.173 09:26:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:14.173 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:14.173 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:14.173 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:14.173 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:14.173 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:14.431 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ ... ]' [... full bdev_get_bdevs listing for d66fad95-3260-4cc6-93a6-775fd4650dc6 elided; identical to the listing shown above ...] 00:29:14.431 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:14.431 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:14.431 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:14.431 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:14.431 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:14.431 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:14.431 09:26:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171
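Note the arithmetic: get_bdev_size reports the data volume as 103424 MiB, and cache_size comes out as 5171 MiB, which matches 5% of that figure under integer division, so the write-buffer cache is evidently sized proportionally to the volume it fronts. The split below then carves exactly one partition of that size out of the cache controller's namespace; as a standalone call (verbatim from the trace):

    # one 5171 MiB split of nvc0n1 -> nvc0n1p0, the future FTL cache
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1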
00:29:14.431 09:26:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:14.690 09:26:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:29:14.690 09:26:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:14.690 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:14.690 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:14.690 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:14.690 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:14.690 09:26:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d66fad95-3260-4cc6-93a6-775fd4650dc6 00:29:14.948 09:26:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ ... ]' [... full bdev_get_bdevs listing for d66fad95-3260-4cc6-93a6-775fd4650dc6 elided; identical to the listing shown above ...] 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d66fad95-3260-4cc6-93a6-775fd4650dc6
--l2p_dram_limit 10' 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:15.207 09:26:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d66fad95-3260-4cc6-93a6-775fd4650dc6 --l2p_dram_limit 10 -c nvc0n1p0 00:29:15.467 [2024-11-20 09:26:10.418003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.418077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:15.467 [2024-11-20 09:26:10.418136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:15.467 [2024-11-20 09:26:10.418149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.418265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.418286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:15.467 [2024-11-20 09:26:10.418303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:15.467 [2024-11-20 09:26:10.418315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.418358] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:15.467 [2024-11-20 09:26:10.419502] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:15.467 [2024-11-20 09:26:10.419734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.419756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:15.467 [2024-11-20 09:26:10.419774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:29:15.467 [2024-11-20 09:26:10.419786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.419947] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 88b217de-e499-40b3-9776-0cde79366cd3 00:29:15.467 [2024-11-20 09:26:10.421932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.421980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:15.467 [2024-11-20 09:26:10.421997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:29:15.467 [2024-11-20 09:26:10.422012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.432840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.432935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:15.467 [2024-11-20 09:26:10.432961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.740 ms 00:29:15.467 [2024-11-20 09:26:10.432976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.433166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.433190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:15.467 [2024-11-20 09:26:10.433204] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:29:15.467 [2024-11-20 09:26:10.433224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.433326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.433348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:15.467 [2024-11-20 09:26:10.433377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:15.467 [2024-11-20 09:26:10.433394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.433429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:15.467 [2024-11-20 09:26:10.439352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.439543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:15.467 [2024-11-20 09:26:10.439582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.929 ms 00:29:15.467 [2024-11-20 09:26:10.439596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.439675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.439695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:15.467 [2024-11-20 09:26:10.439711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:15.467 [2024-11-20 09:26:10.439724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.439795] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:15.467 [2024-11-20 09:26:10.439972] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:15.467 [2024-11-20 09:26:10.440013] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:15.467 [2024-11-20 09:26:10.440029] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:15.467 [2024-11-20 09:26:10.440046] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:15.467 [2024-11-20 09:26:10.440060] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:15.467 [2024-11-20 09:26:10.440075] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:15.467 [2024-11-20 09:26:10.440086] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:15.467 [2024-11-20 09:26:10.440102] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:15.467 [2024-11-20 09:26:10.440113] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:15.467 [2024-11-20 09:26:10.440127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.440138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:15.467 [2024-11-20 09:26:10.440152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:29:15.467 [2024-11-20 09:26:10.440208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.440307] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.467 [2024-11-20 09:26:10.440322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:15.467 [2024-11-20 09:26:10.440337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:15.467 [2024-11-20 09:26:10.440349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.467 [2024-11-20 09:26:10.440491] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:15.467 [2024-11-20 09:26:10.440525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:15.467 [2024-11-20 09:26:10.440555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:15.467 [2024-11-20 09:26:10.440567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.467 [2024-11-20 09:26:10.440582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:15.467 [2024-11-20 09:26:10.440592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:15.467 [2024-11-20 09:26:10.440606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:15.467 [2024-11-20 09:26:10.440617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:15.467 [2024-11-20 09:26:10.440630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:15.467 [2024-11-20 09:26:10.440641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:15.467 [2024-11-20 09:26:10.440654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:15.467 [2024-11-20 09:26:10.440665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:15.467 [2024-11-20 09:26:10.440678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:15.467 [2024-11-20 09:26:10.440689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:15.467 [2024-11-20 09:26:10.440703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:15.467 [2024-11-20 09:26:10.440729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.467 [2024-11-20 09:26:10.440744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:15.467 [2024-11-20 09:26:10.440755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:15.467 [2024-11-20 09:26:10.440769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.467 [2024-11-20 09:26:10.440794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:15.467 [2024-11-20 09:26:10.440812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:15.467 [2024-11-20 09:26:10.440822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:15.467 [2024-11-20 09:26:10.440835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:15.467 [2024-11-20 09:26:10.440862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:15.467 [2024-11-20 09:26:10.440892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:15.467 [2024-11-20 09:26:10.440903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:15.467 [2024-11-20 09:26:10.440919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:15.467 [2024-11-20 09:26:10.440930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:15.467 [2024-11-20 09:26:10.440943] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:15.467 [2024-11-20 09:26:10.440955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:15.467 [2024-11-20 09:26:10.440968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:15.467 [2024-11-20 09:26:10.440979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:15.467 [2024-11-20 09:26:10.440995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:15.467 [2024-11-20 09:26:10.441007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:15.467 [2024-11-20 09:26:10.441020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:15.467 [2024-11-20 09:26:10.441031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:15.467 [2024-11-20 09:26:10.441045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:15.467 [2024-11-20 09:26:10.441056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:15.468 [2024-11-20 09:26:10.441070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:15.468 [2024-11-20 09:26:10.441081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.468 [2024-11-20 09:26:10.441095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:15.468 [2024-11-20 09:26:10.441107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:15.468 [2024-11-20 09:26:10.441120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.468 [2024-11-20 09:26:10.441131] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:15.468 [2024-11-20 09:26:10.441147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:15.468 [2024-11-20 09:26:10.441158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:15.468 [2024-11-20 09:26:10.441175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:15.468 [2024-11-20 09:26:10.441202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:15.468 [2024-11-20 09:26:10.441234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:15.468 [2024-11-20 09:26:10.441245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:15.468 [2024-11-20 09:26:10.441259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:15.468 [2024-11-20 09:26:10.441269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:15.468 [2024-11-20 09:26:10.441283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:15.468 [2024-11-20 09:26:10.441299] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:15.468 [2024-11-20 09:26:10.441315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:15.468 [2024-11-20 09:26:10.441331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:15.468 [2024-11-20 09:26:10.441345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:15.468 [2024-11-20 09:26:10.441357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:15.468 [2024-11-20 09:26:10.441372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:15.468 [2024-11-20 09:26:10.441400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:15.468 [2024-11-20 09:26:10.441414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:15.468 [2024-11-20 09:26:10.441426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:15.468 [2024-11-20 09:26:10.441440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:15.468 [2024-11-20 09:26:10.441452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:15.468 [2024-11-20 09:26:10.441469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:15.468 [2024-11-20 09:26:10.441481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:15.468 [2024-11-20 09:26:10.441494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:15.468 [2024-11-20 09:26:10.441506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:15.468 [2024-11-20 09:26:10.441522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:15.468 [2024-11-20 09:26:10.441534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:15.468 [2024-11-20 09:26:10.441550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:15.468 [2024-11-20 09:26:10.441563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:15.468 [2024-11-20 09:26:10.441578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:15.468 [2024-11-20 09:26:10.441590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:15.468 [2024-11-20 09:26:10.441622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:15.468 [2024-11-20 09:26:10.441652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:15.468 [2024-11-20 09:26:10.441667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:15.468 [2024-11-20 09:26:10.441680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:29:15.468 [2024-11-20 09:26:10.441694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:15.468 [2024-11-20 09:26:10.441778] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:15.468 [2024-11-20 09:26:10.441804] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:18.752 [2024-11-20 09:26:13.253142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.253551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:18.752 [2024-11-20 09:26:13.253720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2811.388 ms 00:29:18.752 [2024-11-20 09:26:13.253783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.752 [2024-11-20 09:26:13.294833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.295243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:18.752 [2024-11-20 09:26:13.295380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.643 ms 00:29:18.752 [2024-11-20 09:26:13.295438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.752 [2024-11-20 09:26:13.295857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.296038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:18.752 [2024-11-20 09:26:13.296156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:18.752 [2024-11-20 09:26:13.296282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.752 [2024-11-20 09:26:13.341464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.341889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:18.752 [2024-11-20 09:26:13.342030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.059 ms 00:29:18.752 [2024-11-20 09:26:13.342176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.752 [2024-11-20 09:26:13.342306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.342458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:18.752 [2024-11-20 09:26:13.342571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:18.752 [2024-11-20 09:26:13.342627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.752 [2024-11-20 09:26:13.343398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.343537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:18.752 [2024-11-20 09:26:13.343662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:29:18.752 [2024-11-20 09:26:13.343792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.752 [2024-11-20 09:26:13.343994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.344049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:18.752 [2024-11-20 09:26:13.344153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:29:18.752 [2024-11-20 09:26:13.344185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.752 [2024-11-20 09:26:13.366130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.366202] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:18.752 [2024-11-20 09:26:13.366269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.906 ms 00:29:18.752 [2024-11-20 09:26:13.366286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.752 [2024-11-20 09:26:13.381056] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:18.752 [2024-11-20 09:26:13.385627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.385704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:18.752 [2024-11-20 09:26:13.385730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.187 ms 00:29:18.752 [2024-11-20 09:26:13.385744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.752 [2024-11-20 09:26:13.483733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.752 [2024-11-20 09:26:13.483815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:18.753 [2024-11-20 09:26:13.483859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.911 ms 00:29:18.753 [2024-11-20 09:26:13.483873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.484163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.484189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:18.753 [2024-11-20 09:26:13.484210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:29:18.753 [2024-11-20 09:26:13.484222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.517173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.517259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:18.753 [2024-11-20 09:26:13.517303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.843 ms 00:29:18.753 [2024-11-20 09:26:13.517316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.548939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.549327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:18.753 [2024-11-20 09:26:13.549369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.517 ms 00:29:18.753 [2024-11-20 09:26:13.549384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.550401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.550433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:18.753 [2024-11-20 09:26:13.550452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:29:18.753 [2024-11-20 09:26:13.550465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.646670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.646755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:18.753 [2024-11-20 09:26:13.646787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.035 ms 00:29:18.753 [2024-11-20 09:26:13.646802] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.684792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.684897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:18.753 [2024-11-20 09:26:13.684940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.734 ms 00:29:18.753 [2024-11-20 09:26:13.684953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.719925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.720009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:18.753 [2024-11-20 09:26:13.720036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.835 ms 00:29:18.753 [2024-11-20 09:26:13.720049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.754330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.754420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:18.753 [2024-11-20 09:26:13.754447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.163 ms 00:29:18.753 [2024-11-20 09:26:13.754462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.754565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.754585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:18.753 [2024-11-20 09:26:13.754609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:18.753 [2024-11-20 09:26:13.754622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.754843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:18.753 [2024-11-20 09:26:13.754866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:18.753 [2024-11-20 09:26:13.754888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:18.753 [2024-11-20 09:26:13.754900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:18.753 [2024-11-20 09:26:13.756402] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3337.799 ms, result 0 00:29:18.753 { 00:29:18.753 "name": "ftl0", 00:29:18.753 "uuid": "88b217de-e499-40b3-9776-0cde79366cd3" 00:29:18.753 } 00:29:18.753 09:26:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:29:19.011 09:26:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:19.011 09:26:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:29:19.011 09:26:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:29:19.011 09:26:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:29:19.579 /dev/nbd0 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
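With FTL startup complete, nbd_start_disk exports the new bdev through the kernel NBD driver so ordinary block tools can drive it, and waitfornbd (traced below) polls /proc/partitions and then issues one direct-I/O read to confirm the node really answers. A condensed equivalent, with the polling loop as a rough stand-in for the helper:

    modprobe nbd
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done  # waitfornbd's check
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # sanity read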
00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:29:19.579 1+0 records in 00:29:19.579 1+0 records out 00:29:19.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047328 s, 8.7 MB/s 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:29:19.579 09:26:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 [2024-11-20 09:26:14.617041] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... [2024-11-20 09:26:14.617262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81370 ] [2024-11-20 09:26:14.812770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-11-20 09:26:14.983877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.470  [2024-11-20T09:26:17.527Z] Copying: 154/1024 [MB] (154 MBps) [2024-11-20T09:26:18.462Z] Copying: 304/1024 [MB] (150 MBps) [2024-11-20T09:26:19.397Z] Copying: 461/1024 [MB] (156 MBps) [2024-11-20T09:26:20.771Z] Copying: 622/1024 [MB] (161 MBps) [2024-11-20T09:26:21.707Z] Copying: 766/1024 [MB] (144 MBps) [2024-11-20T09:26:22.274Z] Copying: 909/1024 [MB] (142 MBps) [2024-11-20T09:26:23.208Z] Copying: 1024/1024 [MB] (average 151 MBps) 00:29:28.088 00:29:28.346 09:26:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
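One gigabyte of random data (262144 blocks of 4096 bytes) now sits in testfile with its md5 recorded as the reference checksum; the second spdk_dd instance, started below, replays that file onto /dev/nbd0 with O_DIRECT, pushing every block through the FTL write path. The write phase, verbatim from the trace modulo the shell variables:

    dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    # stage 1 GiB of random data and checksum it
    $dd_bin -m 0x2 --if=/dev/urandom --of=$testfile --bs=4096 --count=262144
    md5sum $testfile
    # replay it onto the FTL bdev via the nbd node, bypassing the page cache
    $dd_bin -m 0x2 --if=$testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct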
00:29:30.889 09:26:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:29:30.889 [2024-11-20 09:26:25.623057] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:29:30.889 [2024-11-20 09:26:25.623274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81481 ] 00:29:30.889 [2024-11-20 09:26:25.813944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.889 [2024-11-20 09:26:25.975086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.263  [2024-11-20T09:26:28.318Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-20T09:26:29.755Z] Copying: 29/1024 [MB] (15 MBps) [... roughly sixty further Copying progress entries elided; throughput held between 13 and 17 MBps for the whole pass ...] [2024-11-20T09:27:33.084Z] Copying: 1015/1024 [MB] (16 MBps) [2024-11-20T09:27:34.019Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:30:38.899 00:30:38.899 09:27:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:30:39.463 09:27:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:30:39.463 09:27:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
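Teardown mirrors the setup: flush the nbd node, stop the NBD export, then unload the FTL bdev. As the trace below shows, a graceful bdev_ftl_unload is what persists the L2P, NV cache metadata, valid map, band and trim metadata, and finally writes the superblock with the clean state set. The three commands involved:

    sync /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0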
00:30:39.463 [2024-11-20 09:27:34.544356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.463 [2024-11-20 09:27:34.544434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:39.463 [2024-11-20 09:27:34.544464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:39.463 [2024-11-20 09:27:34.544481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.463 [2024-11-20 09:27:34.544518] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:39.463 [2024-11-20 09:27:34.548253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.463 [2024-11-20 09:27:34.548298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:39.463 [2024-11-20 09:27:34.548319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:30:39.463 [2024-11-20 09:27:34.548332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.463 [2024-11-20 09:27:34.550242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.463 [2024-11-20 09:27:34.550288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:39.463 [2024-11-20 09:27:34.550309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.852 ms 00:30:39.463 [2024-11-20 09:27:34.550322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.463 [2024-11-20 09:27:34.566139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.463 [2024-11-20 09:27:34.566337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:39.463 [2024-11-20 09:27:34.566376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.783 ms 00:30:39.463 [2024-11-20 09:27:34.566390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.463 [2024-11-20 09:27:34.572933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.463 [2024-11-20 09:27:34.573093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:39.463 [2024-11-20 09:27:34.573128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.485 ms 00:30:39.463 [2024-11-20 09:27:34.573142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.722 [2024-11-20 09:27:34.604544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.722 [2024-11-20 09:27:34.604771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:39.722 [2024-11-20 09:27:34.604809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.289 ms 00:30:39.722 [2024-11-20 09:27:34.604824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.722 [2024-11-20 09:27:34.623422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.722 [2024-11-20 09:27:34.623472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:39.722 [2024-11-20 09:27:34.623496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.523 ms 00:30:39.722 [2024-11-20 09:27:34.623513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.722 [2024-11-20 09:27:34.623740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.722 [2024-11-20 09:27:34.623779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:39.722 [2024-11-20 09:27:34.623798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:30:39.722 [2024-11-20 09:27:34.623810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.722 [2024-11-20 09:27:34.654044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.722 [2024-11-20 09:27:34.654089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:39.722 [2024-11-20 09:27:34.654110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.201 ms 00:30:39.722 [2024-11-20 09:27:34.654123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.722 [2024-11-20 09:27:34.684363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.722 [2024-11-20 09:27:34.684409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:39.722 [2024-11-20 09:27:34.684431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.183 ms 00:30:39.722 [2024-11-20 09:27:34.684444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.722 [2024-11-20 09:27:34.714364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.722 [2024-11-20 09:27:34.714411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:39.722 [2024-11-20 09:27:34.714433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.863 ms 00:30:39.722 [2024-11-20 09:27:34.714445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.722 [2024-11-20 09:27:34.744479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.722 [2024-11-20 09:27:34.744525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:39.722 [2024-11-20 09:27:34.744547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.889 ms 00:30:39.722 [2024-11-20 09:27:34.744560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.722 [2024-11-20 09:27:34.744615] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:39.723 [2024-11-20 09:27:34.744639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Band 2 through Band 99 elided; every listed band reports 0 / 261120 wr_cnt: 0 state: free ...] [2024-11-20 09:27:34.746175] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:39.724 [2024-11-20 09:27:34.746198] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:39.724 [2024-11-20 09:27:34.746213] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88b217de-e499-40b3-9776-0cde79366cd3 00:30:39.724 [2024-11-20 09:27:34.746236] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:39.724 [2024-11-20 09:27:34.746256] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:39.724 [2024-11-20 09:27:34.746268] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:39.724 [2024-11-20 09:27:34.746286] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:39.724 [2024-11-20 09:27:34.746298] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:39.724 [2024-11-20 09:27:34.746314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:39.724 [2024-11-20 09:27:34.746325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:39.724 [2024-11-20 09:27:34.746339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:39.724 [2024-11-20 09:27:34.746350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:39.724 [2024-11-20 09:27:34.746364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.724 [2024-11-20 09:27:34.746377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:39.724 [2024-11-20 09:27:34.746393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.754 ms 00:30:39.724 [2024-11-20 09:27:34.746413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.724 [2024-11-20 09:27:34.763555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.724 [2024-11-20 09:27:34.763605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:39.724 [2024-11-20 09:27:34.763631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.070 ms 00:30:39.724 [2024-11-20 09:27:34.763644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.724 [2024-11-20 09:27:34.764142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.724 [2024-11-20 09:27:34.764166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:39.724 [2024-11-20 09:27:34.764183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:30:39.724 [2024-11-20 09:27:34.764195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.724 [2024-11-20 09:27:34.820933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.724 [2024-11-20 09:27:34.821016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:39.724 [2024-11-20 09:27:34.821041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.724 [2024-11-20 09:27:34.821054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.724 [2024-11-20 09:27:34.821150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.724 [2024-11-20 09:27:34.821166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:39.724 [2024-11-20 09:27:34.821182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.724 [2024-11-20 09:27:34.821195] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.724 [2024-11-20 09:27:34.821345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.724 [2024-11-20 09:27:34.821367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:39.724 [2024-11-20 09:27:34.821386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.724 [2024-11-20 09:27:34.821398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.724 [2024-11-20 09:27:34.821432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.724 [2024-11-20 09:27:34.821446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:39.724 [2024-11-20 09:27:34.821462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.724 [2024-11-20 09:27:34.821473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.982 [2024-11-20 09:27:34.933637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.982 [2024-11-20 09:27:34.933727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:39.982 [2024-11-20 09:27:34.933752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.982 [2024-11-20 09:27:34.933765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.982 [2024-11-20 09:27:35.020233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.982 [2024-11-20 09:27:35.020310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:39.982 [2024-11-20 09:27:35.020336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.982 [2024-11-20 09:27:35.020349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.982 [2024-11-20 09:27:35.020499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.982 [2024-11-20 09:27:35.020518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:39.982 [2024-11-20 09:27:35.020535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.982 [2024-11-20 09:27:35.020557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.982 [2024-11-20 09:27:35.020636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.982 [2024-11-20 09:27:35.020679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:39.982 [2024-11-20 09:27:35.020698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.982 [2024-11-20 09:27:35.020711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.982 [2024-11-20 09:27:35.020873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.982 [2024-11-20 09:27:35.020893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:39.982 [2024-11-20 09:27:35.020910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.982 [2024-11-20 09:27:35.020921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.982 [2024-11-20 09:27:35.020984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.982 [2024-11-20 09:27:35.021002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:39.982 [2024-11-20 09:27:35.021017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:39.982 [2024-11-20 09:27:35.021029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.982 [2024-11-20 09:27:35.021084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.982 [2024-11-20 09:27:35.021100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:39.982 [2024-11-20 09:27:35.021115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.982 [2024-11-20 09:27:35.021127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.982 [2024-11-20 09:27:35.021195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.982 [2024-11-20 09:27:35.021212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:39.982 [2024-11-20 09:27:35.021228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.982 [2024-11-20 09:27:35.021240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.982 [2024-11-20 09:27:35.021415] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 477.027 ms, result 0 00:30:39.982 true 00:30:39.982 09:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81222 00:30:39.982 09:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81222 00:30:39.982 09:27:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:30:40.238 [2024-11-20 09:27:35.165041] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:30:40.238 [2024-11-20 09:27:35.165450] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82158 ] 00:30:40.495 [2024-11-20 09:27:35.385721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.496 [2024-11-20 09:27:35.514981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.870  [2024-11-20T09:27:37.923Z] Copying: 164/1024 [MB] (164 MBps) [2024-11-20T09:27:38.857Z] Copying: 329/1024 [MB] (165 MBps) [2024-11-20T09:27:40.230Z] Copying: 489/1024 [MB] (159 MBps) [2024-11-20T09:27:40.864Z] Copying: 648/1024 [MB] (159 MBps) [2024-11-20T09:27:41.848Z] Copying: 807/1024 [MB] (158 MBps) [2024-11-20T09:27:42.414Z] Copying: 972/1024 [MB] (165 MBps) [2024-11-20T09:27:43.348Z] Copying: 1024/1024 [MB] (average 162 MBps) 00:30:48.228 00:30:48.228 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81222 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:30:48.228 09:27:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:48.228 [2024-11-20 09:27:43.321623] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
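The run above is the core of the ftl_dirty_shutdown test: dirty_shutdown.sh line 83 SIGKILLs the spdk_tgt (pid 81222) so FTL never persists a clean state, line 84 removes the stale trace file it left in /dev/shm, line 87 fills testfile2 with 1 GiB of urandom data (262144 blocks x 4096 bytes), and line 88 replays that file onto the ftl0 bdev through the saved JSON config, which is what forces the dirty-state recovery in the startup trace that follows. A minimal sketch of that sequence, with every command and flag taken from the log but the surrounding shell structure (SPDK_BIN_DIR, the svcpid variable) assumed rather than read from the SPDK source:

#!/usr/bin/env bash
# Sketch of the dirty-shutdown flow visible above (dirty_shutdown.sh lines 83-88).
# Paths and flags are copied from the log; the variables are assumptions.

SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2
FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
svcpid=81222   # pid of the spdk_tgt started earlier in the test (assumed variable)

# line 83: simulate a dirty shutdown - SIGKILL the target so FTL cannot
# write its clean-state superblock on the way down
kill -9 "$svcpid"
# line 84: drop the shared-memory trace file the killed target leaves behind
rm -f "/dev/shm/spdk_tgt_trace.pid${svcpid}"

# line 87: generate 1 GiB of random test data
"$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom --of="$TESTFILE" --bs=4096 --count=262144

# line 88: replay the data onto the FTL bdev; loading the JSON config
# recreates ftl0 from the dirty device, triggering recovery at startup
"$SPDK_BIN_DIR/spdk_dd" --if="$TESTFILE" --ob=ftl0 --count=262144 --seek=262144 \
    --json="$FTL_JSON"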
00:30:48.228 [2024-11-20 09:27:43.321814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82239 ] 00:30:48.486 [2024-11-20 09:27:43.495610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.743 [2024-11-20 09:27:43.625789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.002 [2024-11-20 09:27:43.986794] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:49.002 [2024-11-20 09:27:43.987044] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:49.002 [2024-11-20 09:27:44.055228] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:49.002 [2024-11-20 09:27:44.055707] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:49.002 [2024-11-20 09:27:44.055975] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:49.260 [2024-11-20 09:27:44.325740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.260 [2024-11-20 09:27:44.326037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:49.260 [2024-11-20 09:27:44.326068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:49.260 [2024-11-20 09:27:44.326082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.260 [2024-11-20 09:27:44.326188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.260 [2024-11-20 09:27:44.326209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:49.260 [2024-11-20 09:27:44.326223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:49.260 [2024-11-20 09:27:44.326250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.260 [2024-11-20 09:27:44.326301] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:49.260 [2024-11-20 09:27:44.327333] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:49.260 [2024-11-20 09:27:44.327368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.260 [2024-11-20 09:27:44.327383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:49.260 [2024-11-20 09:27:44.327397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:30:49.260 [2024-11-20 09:27:44.327408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.260 [2024-11-20 09:27:44.329355] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:49.260 [2024-11-20 09:27:44.346598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.260 [2024-11-20 09:27:44.346693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:49.260 [2024-11-20 09:27:44.346717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.242 ms 00:30:49.260 [2024-11-20 09:27:44.346730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.261 [2024-11-20 09:27:44.346865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.261 [2024-11-20 09:27:44.346886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:30:49.261 [2024-11-20 09:27:44.346900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:30:49.261 [2024-11-20 09:27:44.346913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.261 [2024-11-20 09:27:44.356919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.261 [2024-11-20 09:27:44.357014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:49.261 [2024-11-20 09:27:44.357036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.858 ms 00:30:49.261 [2024-11-20 09:27:44.357049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.261 [2024-11-20 09:27:44.357178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.261 [2024-11-20 09:27:44.357198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:49.261 [2024-11-20 09:27:44.357212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:30:49.261 [2024-11-20 09:27:44.357225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.261 [2024-11-20 09:27:44.357347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.261 [2024-11-20 09:27:44.357373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:49.261 [2024-11-20 09:27:44.357393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:49.261 [2024-11-20 09:27:44.357406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.261 [2024-11-20 09:27:44.357447] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:49.261 [2024-11-20 09:27:44.362593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.261 [2024-11-20 09:27:44.362639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:49.261 [2024-11-20 09:27:44.362677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.157 ms 00:30:49.261 [2024-11-20 09:27:44.362691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.261 [2024-11-20 09:27:44.362742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.261 [2024-11-20 09:27:44.362759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:49.261 [2024-11-20 09:27:44.362773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:49.261 [2024-11-20 09:27:44.362785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.261 [2024-11-20 09:27:44.362838] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:49.261 [2024-11-20 09:27:44.362880] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:49.261 [2024-11-20 09:27:44.362931] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:49.261 [2024-11-20 09:27:44.362960] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:49.261 [2024-11-20 09:27:44.363072] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:49.261 [2024-11-20 09:27:44.363089] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:49.261 
[2024-11-20 09:27:44.363105] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:49.261 [2024-11-20 09:27:44.363120] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:49.261 [2024-11-20 09:27:44.363139] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:49.261 [2024-11-20 09:27:44.363153] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:49.261 [2024-11-20 09:27:44.363165] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:49.261 [2024-11-20 09:27:44.363177] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:49.261 [2024-11-20 09:27:44.363189] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:49.261 [2024-11-20 09:27:44.363202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.261 [2024-11-20 09:27:44.363214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:49.261 [2024-11-20 09:27:44.363233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:30:49.261 [2024-11-20 09:27:44.363253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.261 [2024-11-20 09:27:44.363358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.261 [2024-11-20 09:27:44.363380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:49.261 [2024-11-20 09:27:44.363393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:30:49.261 [2024-11-20 09:27:44.363405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.261 [2024-11-20 09:27:44.363528] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:49.261 [2024-11-20 09:27:44.363548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:49.261 [2024-11-20 09:27:44.363562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:49.261 [2024-11-20 09:27:44.363574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:49.261 [2024-11-20 09:27:44.363597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:49.261 [2024-11-20 09:27:44.363621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:49.261 [2024-11-20 09:27:44.363632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:49.261 [2024-11-20 09:27:44.363674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:49.261 [2024-11-20 09:27:44.363699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:49.261 [2024-11-20 09:27:44.363710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:49.261 [2024-11-20 09:27:44.363722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:49.261 [2024-11-20 09:27:44.363733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:49.261 [2024-11-20 09:27:44.363743] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:49.261 [2024-11-20 09:27:44.363766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:49.261 [2024-11-20 09:27:44.363777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:49.261 [2024-11-20 09:27:44.363802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.261 [2024-11-20 09:27:44.363824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:49.261 [2024-11-20 09:27:44.363835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.261 [2024-11-20 09:27:44.363857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:49.261 [2024-11-20 09:27:44.363867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.261 [2024-11-20 09:27:44.363889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:49.261 [2024-11-20 09:27:44.363901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.261 [2024-11-20 09:27:44.363923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:49.261 [2024-11-20 09:27:44.363934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:49.261 [2024-11-20 09:27:44.363945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:49.261 [2024-11-20 09:27:44.363956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:49.261 [2024-11-20 09:27:44.363967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:49.261 [2024-11-20 09:27:44.363978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:49.261 [2024-11-20 09:27:44.363989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:49.261 [2024-11-20 09:27:44.364000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:49.261 [2024-11-20 09:27:44.364011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.262 [2024-11-20 09:27:44.364022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:49.262 [2024-11-20 09:27:44.364033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:49.262 [2024-11-20 09:27:44.364045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.262 [2024-11-20 09:27:44.364056] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:49.262 [2024-11-20 09:27:44.364069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:49.262 [2024-11-20 09:27:44.364081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:49.262 [2024-11-20 09:27:44.364098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.262 [2024-11-20 
09:27:44.364111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:49.262 [2024-11-20 09:27:44.364122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:49.262 [2024-11-20 09:27:44.364133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:49.262 [2024-11-20 09:27:44.364144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:49.262 [2024-11-20 09:27:44.364156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:49.262 [2024-11-20 09:27:44.364168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:49.262 [2024-11-20 09:27:44.364181] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:49.262 [2024-11-20 09:27:44.364197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.262 [2024-11-20 09:27:44.364211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:49.262 [2024-11-20 09:27:44.364223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:49.262 [2024-11-20 09:27:44.364235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:49.262 [2024-11-20 09:27:44.364247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:49.262 [2024-11-20 09:27:44.364258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:49.262 [2024-11-20 09:27:44.364271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:49.262 [2024-11-20 09:27:44.364283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:49.262 [2024-11-20 09:27:44.364295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:49.262 [2024-11-20 09:27:44.364307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:49.262 [2024-11-20 09:27:44.364319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:49.262 [2024-11-20 09:27:44.364339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:49.262 [2024-11-20 09:27:44.364351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:49.262 [2024-11-20 09:27:44.364363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:49.262 [2024-11-20 09:27:44.364375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:49.262 [2024-11-20 09:27:44.364387] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:30:49.262 [2024-11-20 09:27:44.364401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.262 [2024-11-20 09:27:44.364415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:49.262 [2024-11-20 09:27:44.364428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:49.262 [2024-11-20 09:27:44.364440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:49.262 [2024-11-20 09:27:44.364452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:49.262 [2024-11-20 09:27:44.364465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.262 [2024-11-20 09:27:44.364477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:49.262 [2024-11-20 09:27:44.364490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:30:49.262 [2024-11-20 09:27:44.364503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.519 [2024-11-20 09:27:44.405462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.519 [2024-11-20 09:27:44.405787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:49.519 [2024-11-20 09:27:44.405907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.889 ms 00:30:49.519 [2024-11-20 09:27:44.406033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.519 [2024-11-20 09:27:44.406249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.519 [2024-11-20 09:27:44.406311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:49.519 [2024-11-20 09:27:44.406412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:30:49.519 [2024-11-20 09:27:44.406459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.519 [2024-11-20 09:27:44.462090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.519 [2024-11-20 09:27:44.462347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:49.519 [2024-11-20 09:27:44.462468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.378 ms 00:30:49.519 [2024-11-20 09:27:44.462531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.519 [2024-11-20 09:27:44.462718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.519 [2024-11-20 09:27:44.462811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:49.519 [2024-11-20 09:27:44.462914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:49.519 [2024-11-20 09:27:44.462961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.519 [2024-11-20 09:27:44.463776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.519 [2024-11-20 09:27:44.463907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:49.519 [2024-11-20 09:27:44.464015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:30:49.519 [2024-11-20 09:27:44.464063] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.519 [2024-11-20 09:27:44.464388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.519 [2024-11-20 09:27:44.464508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:49.519 [2024-11-20 09:27:44.464613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:30:49.520 [2024-11-20 09:27:44.464680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.520 [2024-11-20 09:27:44.485273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.520 [2024-11-20 09:27:44.485510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:49.520 [2024-11-20 09:27:44.485551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.546 ms 00:30:49.520 [2024-11-20 09:27:44.485564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.520 [2024-11-20 09:27:44.502486] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:49.520 [2024-11-20 09:27:44.502548] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:49.520 [2024-11-20 09:27:44.502569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.520 [2024-11-20 09:27:44.502583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:49.520 [2024-11-20 09:27:44.502599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.792 ms 00:30:49.520 [2024-11-20 09:27:44.502611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.520 [2024-11-20 09:27:44.532127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.520 [2024-11-20 09:27:44.532420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:49.520 [2024-11-20 09:27:44.532471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.429 ms 00:30:49.520 [2024-11-20 09:27:44.532484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.520 [2024-11-20 09:27:44.549276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.520 [2024-11-20 09:27:44.549343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:49.520 [2024-11-20 09:27:44.549362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.705 ms 00:30:49.520 [2024-11-20 09:27:44.549375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.520 [2024-11-20 09:27:44.564913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.520 [2024-11-20 09:27:44.564984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:49.520 [2024-11-20 09:27:44.565004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.479 ms 00:30:49.520 [2024-11-20 09:27:44.565016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.520 [2024-11-20 09:27:44.566036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.520 [2024-11-20 09:27:44.566067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:49.520 [2024-11-20 09:27:44.566083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:30:49.520 [2024-11-20 09:27:44.566094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
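Every management step in the startup trace above follows the same four-entry pattern emitted by mngt/ftl_mngt.c: an Action/Rollback marker, then name:, duration: and status: lines. When digging through a long run like this one, a small awk filter condenses the trace into a step/duration table; this is a hypothetical helper, not part of the SPDK tree, and it assumes the console output was saved to a file named console.log:

awk '
  /trace_step.*name:/     { sub(/.*name: /, "");     name = $0 }
  /trace_step.*duration:/ { sub(/.*duration: /, ""); dur  = $0 }
  /trace_step.*status:/   { sub(/.*status: /, "");
                            printf "%-40s %12s  status=%s\n", name, dur, $0 }
' console.log

On this run it immediately shows where startup time goes: "Initialize NV cache" (55.378 ms) and "Initialize metadata" (40.889 ms) dominate the steps so far, while most other steps finish in well under a millisecond.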
00:30:49.778 [2024-11-20 09:27:44.644800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.778 [2024-11-20 09:27:44.644883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:49.778 [2024-11-20 09:27:44.644906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.679 ms 00:30:49.778 [2024-11-20 09:27:44.644920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.778 [2024-11-20 09:27:44.659917] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:49.778 [2024-11-20 09:27:44.664220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.778 [2024-11-20 09:27:44.664261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:49.778 [2024-11-20 09:27:44.664281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.199 ms 00:30:49.778 [2024-11-20 09:27:44.664294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.778 [2024-11-20 09:27:44.664449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.778 [2024-11-20 09:27:44.664472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:49.778 [2024-11-20 09:27:44.664486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:49.778 [2024-11-20 09:27:44.664498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.778 [2024-11-20 09:27:44.664603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.778 [2024-11-20 09:27:44.664624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:49.778 [2024-11-20 09:27:44.664644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:49.778 [2024-11-20 09:27:44.664681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.778 [2024-11-20 09:27:44.664719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.778 [2024-11-20 09:27:44.664742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:49.778 [2024-11-20 09:27:44.664761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:49.778 [2024-11-20 09:27:44.664773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.778 [2024-11-20 09:27:44.664818] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:49.778 [2024-11-20 09:27:44.664836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.778 [2024-11-20 09:27:44.664848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:49.778 [2024-11-20 09:27:44.664861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:49.778 [2024-11-20 09:27:44.664872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.778 [2024-11-20 09:27:44.696771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.778 [2024-11-20 09:27:44.696850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:49.778 [2024-11-20 09:27:44.696878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.860 ms 00:30:49.778 [2024-11-20 09:27:44.696891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.778 [2024-11-20 09:27:44.697016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.778 [2024-11-20 
09:27:44.697036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:49.778 [2024-11-20 09:27:44.697054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:49.778 [2024-11-20 09:27:44.697066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.778 [2024-11-20 09:27:44.698593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.278 ms, result 0 00:30:50.712  [2024-11-20T09:27:46.767Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-20T09:27:48.142Z] Copying: 53/1024 [MB] (25 MBps) [2024-11-20T09:27:49.077Z] Copying: 77/1024 [MB] (24 MBps) [2024-11-20T09:27:50.011Z] Copying: 103/1024 [MB] (26 MBps) [2024-11-20T09:27:50.946Z] Copying: 127/1024 [MB] (23 MBps) [2024-11-20T09:27:51.879Z] Copying: 153/1024 [MB] (25 MBps) [2024-11-20T09:27:52.813Z] Copying: 179/1024 [MB] (25 MBps) [2024-11-20T09:27:53.748Z] Copying: 205/1024 [MB] (26 MBps) [2024-11-20T09:27:55.121Z] Copying: 230/1024 [MB] (25 MBps) [2024-11-20T09:27:56.054Z] Copying: 256/1024 [MB] (25 MBps) [2024-11-20T09:27:56.986Z] Copying: 281/1024 [MB] (25 MBps) [2024-11-20T09:27:57.917Z] Copying: 307/1024 [MB] (26 MBps) [2024-11-20T09:27:58.882Z] Copying: 332/1024 [MB] (24 MBps) [2024-11-20T09:27:59.814Z] Copying: 355/1024 [MB] (23 MBps) [2024-11-20T09:28:00.744Z] Copying: 377/1024 [MB] (22 MBps) [2024-11-20T09:28:02.115Z] Copying: 401/1024 [MB] (23 MBps) [2024-11-20T09:28:03.047Z] Copying: 425/1024 [MB] (23 MBps) [2024-11-20T09:28:03.980Z] Copying: 449/1024 [MB] (23 MBps) [2024-11-20T09:28:04.910Z] Copying: 474/1024 [MB] (24 MBps) [2024-11-20T09:28:05.837Z] Copying: 500/1024 [MB] (26 MBps) [2024-11-20T09:28:06.771Z] Copying: 526/1024 [MB] (26 MBps) [2024-11-20T09:28:08.144Z] Copying: 551/1024 [MB] (24 MBps) [2024-11-20T09:28:08.730Z] Copying: 576/1024 [MB] (25 MBps) [2024-11-20T09:28:10.107Z] Copying: 603/1024 [MB] (26 MBps) [2024-11-20T09:28:11.067Z] Copying: 630/1024 [MB] (26 MBps) [2024-11-20T09:28:12.001Z] Copying: 656/1024 [MB] (26 MBps) [2024-11-20T09:28:12.934Z] Copying: 682/1024 [MB] (26 MBps) [2024-11-20T09:28:13.866Z] Copying: 709/1024 [MB] (26 MBps) [2024-11-20T09:28:14.799Z] Copying: 735/1024 [MB] (25 MBps) [2024-11-20T09:28:15.736Z] Copying: 759/1024 [MB] (24 MBps) [2024-11-20T09:28:17.109Z] Copying: 785/1024 [MB] (26 MBps) [2024-11-20T09:28:18.042Z] Copying: 811/1024 [MB] (25 MBps) [2024-11-20T09:28:18.976Z] Copying: 837/1024 [MB] (26 MBps) [2024-11-20T09:28:19.911Z] Copying: 864/1024 [MB] (26 MBps) [2024-11-20T09:28:20.846Z] Copying: 890/1024 [MB] (26 MBps) [2024-11-20T09:28:21.778Z] Copying: 915/1024 [MB] (25 MBps) [2024-11-20T09:28:23.163Z] Copying: 941/1024 [MB] (26 MBps) [2024-11-20T09:28:23.727Z] Copying: 968/1024 [MB] (26 MBps) [2024-11-20T09:28:25.102Z] Copying: 995/1024 [MB] (26 MBps) [2024-11-20T09:28:26.038Z] Copying: 1019/1024 [MB] (24 MBps) [2024-11-20T09:28:26.296Z] Copying: 1048312/1048576 [kB] (3892 kBps) [2024-11-20T09:28:26.296Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 09:28:26.055083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.176 [2024-11-20 09:28:26.055157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:31.176 [2024-11-20 09:28:26.055180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:31.176 [2024-11-20 09:28:26.055206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.176 [2024-11-20 09:28:26.057444] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:31.176 [2024-11-20 09:28:26.065210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.176 [2024-11-20 09:28:26.065269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:31.176 [2024-11-20 09:28:26.065289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.711 ms 00:31:31.176 [2024-11-20 09:28:26.065302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.176 [2024-11-20 09:28:26.078308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.176 [2024-11-20 09:28:26.078383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:31.176 [2024-11-20 09:28:26.078411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.767 ms 00:31:31.176 [2024-11-20 09:28:26.078431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.176 [2024-11-20 09:28:26.101617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.176 [2024-11-20 09:28:26.101698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:31.176 [2024-11-20 09:28:26.101719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.153 ms 00:31:31.176 [2024-11-20 09:28:26.101755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.176 [2024-11-20 09:28:26.108403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.176 [2024-11-20 09:28:26.108452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:31.176 [2024-11-20 09:28:26.108468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.581 ms 00:31:31.176 [2024-11-20 09:28:26.108480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.176 [2024-11-20 09:28:26.142146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.176 [2024-11-20 09:28:26.142238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:31.176 [2024-11-20 09:28:26.142264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.612 ms 00:31:31.176 [2024-11-20 09:28:26.142277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.176 [2024-11-20 09:28:26.162139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.176 [2024-11-20 09:28:26.162211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:31.176 [2024-11-20 09:28:26.162243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.775 ms 00:31:31.176 [2024-11-20 09:28:26.162268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.176 [2024-11-20 09:28:26.273504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.176 [2024-11-20 09:28:26.273617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:31.176 [2024-11-20 09:28:26.273641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.171 ms 00:31:31.176 [2024-11-20 09:28:26.273695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.436 [2024-11-20 09:28:26.306315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.436 [2024-11-20 09:28:26.306391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:31.436 [2024-11-20 09:28:26.306411] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.593 ms 00:31:31.436 [2024-11-20 09:28:26.306423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.436 [2024-11-20 09:28:26.338207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.436 [2024-11-20 09:28:26.338311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:31.436 [2024-11-20 09:28:26.338335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.730 ms 00:31:31.436 [2024-11-20 09:28:26.338347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.436 [2024-11-20 09:28:26.368344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.436 [2024-11-20 09:28:26.368412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:31.436 [2024-11-20 09:28:26.368447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.938 ms 00:31:31.436 [2024-11-20 09:28:26.368458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.436 [2024-11-20 09:28:26.398407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.436 [2024-11-20 09:28:26.398453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:31.436 [2024-11-20 09:28:26.398470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.853 ms 00:31:31.436 [2024-11-20 09:28:26.398483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.436 [2024-11-20 09:28:26.398527] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:31.436 [2024-11-20 09:28:26.398552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128256 / 261120 wr_cnt: 1 state: open 00:31:31.436 [2024-11-20 09:28:26.398567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398730] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.398995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.399008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.399019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 
[2024-11-20 09:28:26.399031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.399043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:31.436 [2024-11-20 09:28:26.399056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 
state: free 00:31:31.437 [2024-11-20 09:28:26.399334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 
0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:31.437 [2024-11-20 09:28:26.399813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:31.437 [2024-11-20 09:28:26.399824] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88b217de-e499-40b3-9776-0cde79366cd3 00:31:31.437 [2024-11-20 09:28:26.399836] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128256 00:31:31.437 [2024-11-20 09:28:26.399853] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129216 00:31:31.437 [2024-11-20 09:28:26.399878] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128256 00:31:31.437 [2024-11-20 09:28:26.399891] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:31:31.437 [2024-11-20 09:28:26.399902] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:31.437 [2024-11-20 09:28:26.399914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:31.437 [2024-11-20 09:28:26.399926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:31.437 [2024-11-20 09:28:26.399937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:31.437 [2024-11-20 09:28:26.399948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:31.437 [2024-11-20 09:28:26.399959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.437 [2024-11-20 09:28:26.399971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:31.437 [2024-11-20 09:28:26.399983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.434 ms 00:31:31.437 [2024-11-20 09:28:26.399994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.437 [2024-11-20 09:28:26.416958] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:31:31.437 [2024-11-20 09:28:26.417009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:31.437 [2024-11-20 09:28:26.417027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.898 ms 00:31:31.437 [2024-11-20 09:28:26.417040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.437 [2024-11-20 09:28:26.417532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.437 [2024-11-20 09:28:26.417560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:31.437 [2024-11-20 09:28:26.417575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:31:31.437 [2024-11-20 09:28:26.417586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.437 [2024-11-20 09:28:26.463132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.437 [2024-11-20 09:28:26.463213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:31.437 [2024-11-20 09:28:26.463233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.437 [2024-11-20 09:28:26.463247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.437 [2024-11-20 09:28:26.463344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.437 [2024-11-20 09:28:26.463360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:31.437 [2024-11-20 09:28:26.463386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.437 [2024-11-20 09:28:26.463398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.437 [2024-11-20 09:28:26.463509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.437 [2024-11-20 09:28:26.463531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:31.437 [2024-11-20 09:28:26.463544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.437 [2024-11-20 09:28:26.463556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.437 [2024-11-20 09:28:26.463580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.437 [2024-11-20 09:28:26.463594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:31.438 [2024-11-20 09:28:26.463607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.438 [2024-11-20 09:28:26.463619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.696 [2024-11-20 09:28:26.577048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.696 [2024-11-20 09:28:26.577115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:31.696 [2024-11-20 09:28:26.577134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.696 [2024-11-20 09:28:26.577147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.696 [2024-11-20 09:28:26.663980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.696 [2024-11-20 09:28:26.664058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:31.696 [2024-11-20 09:28:26.664085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.696 [2024-11-20 09:28:26.664097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
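The write-amplification figure in the statistics dump above follows directly from the two counters that precede it; as a consistency check:

\[
\mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{129216}{128256} \approx 1.0075
\]

The 960-block difference is I/O the FTL issued on its own behalf, largely the metadata persisted during the dirty-shutdown sequence above, on top of the 128256 user blocks that match the "total valid LBAs" line.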
00:31:31.696 [2024-11-20 09:28:26.664259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.696 [2024-11-20 09:28:26.664281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:31.696 [2024-11-20 09:28:26.664303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.696 [2024-11-20 09:28:26.664314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.696 [2024-11-20 09:28:26.664363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.696 [2024-11-20 09:28:26.664380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:31.696 [2024-11-20 09:28:26.664393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.696 [2024-11-20 09:28:26.664404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.696 [2024-11-20 09:28:26.664537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.696 [2024-11-20 09:28:26.664574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:31.696 [2024-11-20 09:28:26.664588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.696 [2024-11-20 09:28:26.664600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.696 [2024-11-20 09:28:26.664680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.696 [2024-11-20 09:28:26.664700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:31.696 [2024-11-20 09:28:26.664713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.696 [2024-11-20 09:28:26.664726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.696 [2024-11-20 09:28:26.664775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.696 [2024-11-20 09:28:26.664797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:31.696 [2024-11-20 09:28:26.664810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.696 [2024-11-20 09:28:26.664821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.696 [2024-11-20 09:28:26.664876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:31.696 [2024-11-20 09:28:26.664893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:31.696 [2024-11-20 09:28:26.664906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:31.696 [2024-11-20 09:28:26.664917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.696 [2024-11-20 09:28:26.665099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 611.065 ms, result 0 00:31:33.084 00:31:33.084 00:31:33.084 09:28:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:31:35.614 09:28:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:35.614 [2024-11-20 09:28:30.490301] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
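The spdk_dd invocation above reads 262144 blocks from the ftl0 bdev into a file. Taking the 4 KiB FTL block size implied by the progress meter further down (1048576 kB total), the expected transfer is

\[
262144 \times 4096\,\mathrm{B} = 1\,073\,741\,824\,\mathrm{B} = 1024\,\mathrm{MiB},
\]

which matches the final "Copying: 1024/1024 [MB]" update. Roughly 38 s elapse between FTL startup completing (09:28:31.8) and the last progress update (09:29:09.7), consistent with the reported average of 27 MBps.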
00:31:35.614 [2024-11-20 09:28:30.490487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82694 ] 00:31:35.614 [2024-11-20 09:28:30.681864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.872 [2024-11-20 09:28:30.832742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.438 [2024-11-20 09:28:31.259902] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:36.438 [2024-11-20 09:28:31.260007] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:36.438 [2024-11-20 09:28:31.427587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.427673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:36.438 [2024-11-20 09:28:31.427703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:36.438 [2024-11-20 09:28:31.427715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.427780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.427799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:36.438 [2024-11-20 09:28:31.427816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:36.438 [2024-11-20 09:28:31.427827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.427857] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:36.438 [2024-11-20 09:28:31.428753] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:36.438 [2024-11-20 09:28:31.428789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.428803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:36.438 [2024-11-20 09:28:31.428815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:31:36.438 [2024-11-20 09:28:31.428827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.430739] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:36.438 [2024-11-20 09:28:31.447756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.447805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:36.438 [2024-11-20 09:28:31.447824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.018 ms 00:31:36.438 [2024-11-20 09:28:31.447836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.447913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.447931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:36.438 [2024-11-20 09:28:31.447945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:36.438 [2024-11-20 09:28:31.447956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.456566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
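Each management step in this startup sequence is reported by trace_step in mngt/ftl_mngt.c as the same four-record group: Action / name / duration / status. A minimal C sketch of that timing pattern, using a hypothetical run_step() wrapper rather than SPDK's actual code:

    #include <stdio.h>
    #include <time.h>

    /* Illustrative only -- not SPDK's trace_step implementation.
     * Times a management step and emits the name/duration/status
     * group seen in the records above. */
    static int run_step(const char *name, int (*step)(void))
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = step();                  /* run the step */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("[FTL][ftl0] Action\n");
        printf("[FTL][ftl0] name: %s\n", name);
        printf("[FTL][ftl0] duration: %.3f ms\n", ms);
        printf("[FTL][ftl0] status: %d\n", status);
        return status;
    }

    static int load_super_block(void) { return 0; }  /* hypothetical stand-in */

    int main(void)
    {
        return run_step("Load super block", load_super_block);
    }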
00:31:36.438 [2024-11-20 09:28:31.456635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:36.438 [2024-11-20 09:28:31.456662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.515 ms 00:31:36.438 [2024-11-20 09:28:31.456676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.456785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.456804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:36.438 [2024-11-20 09:28:31.456816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:36.438 [2024-11-20 09:28:31.456828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.456890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.456908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:36.438 [2024-11-20 09:28:31.456920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:36.438 [2024-11-20 09:28:31.456932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.456969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:36.438 [2024-11-20 09:28:31.461989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.462040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:36.438 [2024-11-20 09:28:31.462056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.030 ms 00:31:36.438 [2024-11-20 09:28:31.462072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.462112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.462126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:36.438 [2024-11-20 09:28:31.462139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:36.438 [2024-11-20 09:28:31.462150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.462220] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:36.438 [2024-11-20 09:28:31.462264] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:36.438 [2024-11-20 09:28:31.462308] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:36.438 [2024-11-20 09:28:31.462332] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:36.438 [2024-11-20 09:28:31.462443] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:36.438 [2024-11-20 09:28:31.462459] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:36.438 [2024-11-20 09:28:31.462474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:36.438 [2024-11-20 09:28:31.462488] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:36.438 [2024-11-20 09:28:31.462502] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:36.438 [2024-11-20 09:28:31.462514] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:36.438 [2024-11-20 09:28:31.462526] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:36.438 [2024-11-20 09:28:31.462537] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:36.438 [2024-11-20 09:28:31.462548] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:36.438 [2024-11-20 09:28:31.462564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.462576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:36.438 [2024-11-20 09:28:31.462587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:31:36.438 [2024-11-20 09:28:31.462598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.462715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.438 [2024-11-20 09:28:31.462733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:36.438 [2024-11-20 09:28:31.462746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:31:36.438 [2024-11-20 09:28:31.462757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.438 [2024-11-20 09:28:31.462878] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:36.438 [2024-11-20 09:28:31.462901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:36.438 [2024-11-20 09:28:31.462914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:36.438 [2024-11-20 09:28:31.462926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.439 [2024-11-20 09:28:31.462938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:36.439 [2024-11-20 09:28:31.462948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:36.439 [2024-11-20 09:28:31.462958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:36.439 [2024-11-20 09:28:31.462968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:36.439 [2024-11-20 09:28:31.462981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:36.439 [2024-11-20 09:28:31.462991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:36.439 [2024-11-20 09:28:31.463001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:36.439 [2024-11-20 09:28:31.463011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:36.439 [2024-11-20 09:28:31.463020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:36.439 [2024-11-20 09:28:31.463030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:36.439 [2024-11-20 09:28:31.463041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:36.439 [2024-11-20 09:28:31.463063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:36.439 [2024-11-20 09:28:31.463086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:36.439 [2024-11-20 09:28:31.463096] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:36.439 [2024-11-20 09:28:31.463117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.439 [2024-11-20 09:28:31.463138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:36.439 [2024-11-20 09:28:31.463148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.439 [2024-11-20 09:28:31.463169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:36.439 [2024-11-20 09:28:31.463178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.439 [2024-11-20 09:28:31.463200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:36.439 [2024-11-20 09:28:31.463210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:36.439 [2024-11-20 09:28:31.463230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:36.439 [2024-11-20 09:28:31.463240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:36.439 [2024-11-20 09:28:31.463260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:36.439 [2024-11-20 09:28:31.463271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:36.439 [2024-11-20 09:28:31.463280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:36.439 [2024-11-20 09:28:31.463291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:36.439 [2024-11-20 09:28:31.463301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:36.439 [2024-11-20 09:28:31.463312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:36.439 [2024-11-20 09:28:31.463333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:36.439 [2024-11-20 09:28:31.463343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463353] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:36.439 [2024-11-20 09:28:31.463365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:36.439 [2024-11-20 09:28:31.463376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:36.439 [2024-11-20 09:28:31.463388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:36.439 [2024-11-20 09:28:31.463400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:36.439 [2024-11-20 09:28:31.463411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:36.439 [2024-11-20 09:28:31.463422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:36.439 
[2024-11-20 09:28:31.463433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:36.439 [2024-11-20 09:28:31.463443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:36.439 [2024-11-20 09:28:31.463453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:36.439 [2024-11-20 09:28:31.463465] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:36.439 [2024-11-20 09:28:31.463479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:36.439 [2024-11-20 09:28:31.463491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:36.439 [2024-11-20 09:28:31.463503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:36.439 [2024-11-20 09:28:31.463514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:36.439 [2024-11-20 09:28:31.463526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:36.439 [2024-11-20 09:28:31.463537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:36.439 [2024-11-20 09:28:31.463548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:36.439 [2024-11-20 09:28:31.463558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:36.439 [2024-11-20 09:28:31.463570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:36.439 [2024-11-20 09:28:31.463581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:36.439 [2024-11-20 09:28:31.463591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:36.439 [2024-11-20 09:28:31.463602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:36.439 [2024-11-20 09:28:31.463613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:36.439 [2024-11-20 09:28:31.463624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:36.439 [2024-11-20 09:28:31.463635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:36.439 [2024-11-20 09:28:31.463660] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:36.439 [2024-11-20 09:28:31.463681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:36.439 [2024-11-20 09:28:31.463694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:36.439 [2024-11-20 09:28:31.463705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:36.439 [2024-11-20 09:28:31.463717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:36.439 [2024-11-20 09:28:31.463728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:36.439 [2024-11-20 09:28:31.463741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.439 [2024-11-20 09:28:31.463753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:36.439 [2024-11-20 09:28:31.463764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:31:36.439 [2024-11-20 09:28:31.463790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.439 [2024-11-20 09:28:31.503793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.439 [2024-11-20 09:28:31.503860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:36.439 [2024-11-20 09:28:31.503897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.935 ms 00:31:36.439 [2024-11-20 09:28:31.503910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.439 [2024-11-20 09:28:31.504037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.439 [2024-11-20 09:28:31.504052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:36.439 [2024-11-20 09:28:31.504064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:36.439 [2024-11-20 09:28:31.504075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.574617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.574706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:36.699 [2024-11-20 09:28:31.574739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.427 ms 00:31:36.699 [2024-11-20 09:28:31.574755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.574854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.574873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:36.699 [2024-11-20 09:28:31.574890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:36.699 [2024-11-20 09:28:31.574911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.575627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.575684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:36.699 [2024-11-20 09:28:31.575704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:31:36.699 [2024-11-20 09:28:31.575718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.575929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.575953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:36.699 [2024-11-20 09:28:31.575969] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:31:36.699 [2024-11-20 09:28:31.575991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.600018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.600087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:36.699 [2024-11-20 09:28:31.600115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.991 ms 00:31:36.699 [2024-11-20 09:28:31.600129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.621013] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:36.699 [2024-11-20 09:28:31.621093] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:36.699 [2024-11-20 09:28:31.621117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.621132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:36.699 [2024-11-20 09:28:31.621149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.801 ms 00:31:36.699 [2024-11-20 09:28:31.621163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.658037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.658115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:36.699 [2024-11-20 09:28:31.658138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.813 ms 00:31:36.699 [2024-11-20 09:28:31.658153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.677908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.677996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:36.699 [2024-11-20 09:28:31.678026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.662 ms 00:31:36.699 [2024-11-20 09:28:31.678040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.697133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.697229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:36.699 [2024-11-20 09:28:31.697250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.038 ms 00:31:36.699 [2024-11-20 09:28:31.697264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.698412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.698456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:36.699 [2024-11-20 09:28:31.698474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:31:36.699 [2024-11-20 09:28:31.698494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.783342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.783449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:36.699 [2024-11-20 09:28:31.783480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.816 ms 00:31:36.699 [2024-11-20 09:28:31.783493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.796886] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:36.699 [2024-11-20 09:28:31.800610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.800660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:36.699 [2024-11-20 09:28:31.800679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.035 ms 00:31:36.699 [2024-11-20 09:28:31.800692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.800826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.800845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:36.699 [2024-11-20 09:28:31.800859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:36.699 [2024-11-20 09:28:31.800874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.699 [2024-11-20 09:28:31.802883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.699 [2024-11-20 09:28:31.802924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:36.699 [2024-11-20 09:28:31.802939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.946 ms 00:31:36.700 [2024-11-20 09:28:31.802950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.700 [2024-11-20 09:28:31.802988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.700 [2024-11-20 09:28:31.803004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:36.700 [2024-11-20 09:28:31.803017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:36.700 [2024-11-20 09:28:31.803029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.700 [2024-11-20 09:28:31.803074] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:36.700 [2024-11-20 09:28:31.803094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.700 [2024-11-20 09:28:31.803106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:36.700 [2024-11-20 09:28:31.803118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:36.700 [2024-11-20 09:28:31.803129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.958 [2024-11-20 09:28:31.835322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.958 [2024-11-20 09:28:31.835406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:36.958 [2024-11-20 09:28:31.835455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.167 ms 00:31:36.958 [2024-11-20 09:28:31.835475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.958 [2024-11-20 09:28:31.835566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.958 [2024-11-20 09:28:31.835584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:36.958 [2024-11-20 09:28:31.835597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:36.958 [2024-11-20 09:28:31.835608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
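The L2P numbers reported during this startup are mutually consistent: with 20971520 entries at an address size of 4 bytes (see the layout dump above), the full table is

\[
20\,971\,520 \times 4\,\mathrm{B} = 83\,886\,080\,\mathrm{B} = 80\,\mathrm{MiB},
\]

exactly the 80.00 MiB "Region l2p" in the NV cache layout. Assuming one entry per 4 KiB block, that maps 80 GiB of user space against the 103424 MiB base device, and per the ftl_l2p_cache notice above at most 9 MiB of the table is kept resident at a time.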
00:31:36.958 [2024-11-20 09:28:31.837061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.875 ms, result 0 00:31:38.333  [2024-11-20T09:28:34.386Z] Copying: 872/1048576 [kB] (872 kBps) [2024-11-20T09:28:35.358Z] Copying: 4376/1048576 [kB] (3504 kBps) [2024-11-20T09:28:36.293Z] Copying: 23/1024 [MB] (19 MBps) [2024-11-20T09:28:37.228Z] Copying: 51/1024 [MB] (27 MBps) [2024-11-20T09:28:38.163Z] Copying: 79/1024 [MB] (28 MBps) [2024-11-20T09:28:39.096Z] Copying: 108/1024 [MB] (28 MBps) [2024-11-20T09:28:40.481Z] Copying: 136/1024 [MB] (28 MBps) [2024-11-20T09:28:41.085Z] Copying: 163/1024 [MB] (27 MBps) [2024-11-20T09:28:42.461Z] Copying: 192/1024 [MB] (28 MBps) [2024-11-20T09:28:43.395Z] Copying: 221/1024 [MB] (28 MBps) [2024-11-20T09:28:44.328Z] Copying: 250/1024 [MB] (29 MBps) [2024-11-20T09:28:45.262Z] Copying: 277/1024 [MB] (26 MBps) [2024-11-20T09:28:46.194Z] Copying: 306/1024 [MB] (29 MBps) [2024-11-20T09:28:47.124Z] Copying: 334/1024 [MB] (27 MBps) [2024-11-20T09:28:48.496Z] Copying: 363/1024 [MB] (28 MBps) [2024-11-20T09:28:49.426Z] Copying: 392/1024 [MB] (29 MBps) [2024-11-20T09:28:50.410Z] Copying: 421/1024 [MB] (28 MBps) [2024-11-20T09:28:51.343Z] Copying: 450/1024 [MB] (28 MBps) [2024-11-20T09:28:52.275Z] Copying: 476/1024 [MB] (26 MBps) [2024-11-20T09:28:53.207Z] Copying: 505/1024 [MB] (28 MBps) [2024-11-20T09:28:54.140Z] Copying: 535/1024 [MB] (29 MBps) [2024-11-20T09:28:55.072Z] Copying: 564/1024 [MB] (29 MBps) [2024-11-20T09:28:56.444Z] Copying: 593/1024 [MB] (29 MBps) [2024-11-20T09:28:57.375Z] Copying: 623/1024 [MB] (29 MBps) [2024-11-20T09:28:58.309Z] Copying: 652/1024 [MB] (28 MBps) [2024-11-20T09:28:59.290Z] Copying: 682/1024 [MB] (30 MBps) [2024-11-20T09:29:00.222Z] Copying: 712/1024 [MB] (29 MBps) [2024-11-20T09:29:01.158Z] Copying: 742/1024 [MB] (30 MBps) [2024-11-20T09:29:02.091Z] Copying: 773/1024 [MB] (30 MBps) [2024-11-20T09:29:03.079Z] Copying: 802/1024 [MB] (29 MBps) [2024-11-20T09:29:04.453Z] Copying: 832/1024 [MB] (29 MBps) [2024-11-20T09:29:05.387Z] Copying: 862/1024 [MB] (30 MBps) [2024-11-20T09:29:06.321Z] Copying: 892/1024 [MB] (30 MBps) [2024-11-20T09:29:07.255Z] Copying: 923/1024 [MB] (30 MBps) [2024-11-20T09:29:08.187Z] Copying: 954/1024 [MB] (30 MBps) [2024-11-20T09:29:09.121Z] Copying: 984/1024 [MB] (30 MBps) [2024-11-20T09:29:09.696Z] Copying: 1013/1024 [MB] (28 MBps) [2024-11-20T09:29:09.696Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-20 09:29:09.441904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.441989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:14.576 [2024-11-20 09:29:09.442024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:14.576 [2024-11-20 09:29:09.442037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.442071] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:14.576 [2024-11-20 09:29:09.445986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.446023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:14.576 [2024-11-20 09:29:09.446038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.890 ms 00:32:14.576 [2024-11-20 09:29:09.446049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 
09:29:09.446315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.446335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:14.576 [2024-11-20 09:29:09.446354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:32:14.576 [2024-11-20 09:29:09.446366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.458584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.458699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:14.576 [2024-11-20 09:29:09.458722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.188 ms 00:32:14.576 [2024-11-20 09:29:09.458736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.465812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.465869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:14.576 [2024-11-20 09:29:09.465886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.026 ms 00:32:14.576 [2024-11-20 09:29:09.465912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.499157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.499233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:14.576 [2024-11-20 09:29:09.499254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.127 ms 00:32:14.576 [2024-11-20 09:29:09.499267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.517089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.517141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:14.576 [2024-11-20 09:29:09.517160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.772 ms 00:32:14.576 [2024-11-20 09:29:09.517172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.518872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.518918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:14.576 [2024-11-20 09:29:09.518935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:32:14.576 [2024-11-20 09:29:09.518948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.549676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.549722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:14.576 [2024-11-20 09:29:09.549739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.696 ms 00:32:14.576 [2024-11-20 09:29:09.549750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.580731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.580810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:14.576 [2024-11-20 09:29:09.580853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.931 ms 00:32:14.576 [2024-11-20 09:29:09.580866] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.611362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.611426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:14.576 [2024-11-20 09:29:09.611446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.399 ms 00:32:14.576 [2024-11-20 09:29:09.611459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.641419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.576 [2024-11-20 09:29:09.641468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:14.576 [2024-11-20 09:29:09.641485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.829 ms 00:32:14.576 [2024-11-20 09:29:09.641497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.576 [2024-11-20 09:29:09.641542] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:14.576 [2024-11-20 09:29:09.641566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:14.576 [2024-11-20 09:29:09.641580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:14.576 [2024-11-20 09:29:09.641594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:14.576 [2024-11-20 09:29:09.641788] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.641987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 
09:29:09.642110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:32:14.577 [2024-11-20 09:29:09.642415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:14.577 [2024-11-20 09:29:09.642837] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:14.577 [2024-11-20 09:29:09.642849] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88b217de-e499-40b3-9776-0cde79366cd3 00:32:14.577 [2024-11-20 09:29:09.642861] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:14.577 [2024-11-20 09:29:09.642873] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136384 00:32:14.577 [2024-11-20 09:29:09.642884] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134400 00:32:14.577 [2024-11-20 09:29:09.642902] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:32:14.577 [2024-11-20 09:29:09.642913] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:14.577 [2024-11-20 09:29:09.642924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:14.578 [2024-11-20 09:29:09.642935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:14.578 [2024-11-20 09:29:09.642958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:14.578 [2024-11-20 09:29:09.642969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:14.578 [2024-11-20 09:29:09.642980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.578 [2024-11-20 09:29:09.642991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:14.578 [2024-11-20 09:29:09.643003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:32:14.578 [2024-11-20 09:29:09.643015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.578 [2024-11-20 09:29:09.659868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.578 [2024-11-20 09:29:09.659918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:14.578 [2024-11-20 09:29:09.659935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.812 ms 00:32:14.578 [2024-11-20 09:29:09.659947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.578 [2024-11-20 09:29:09.660414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:14.578 [2024-11-20 09:29:09.660444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:32:14.578 [2024-11-20 09:29:09.660458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:32:14.578 [2024-11-20 09:29:09.660470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.704832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 [2024-11-20 09:29:09.704910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:14.836 [2024-11-20 09:29:09.704938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.704950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.705041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 [2024-11-20 09:29:09.705056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:14.836 [2024-11-20 09:29:09.705068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.705081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.705181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 [2024-11-20 09:29:09.705208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:14.836 [2024-11-20 09:29:09.705220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.705233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.705257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 [2024-11-20 09:29:09.705270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:14.836 [2024-11-20 09:29:09.705282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.705293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.817368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 [2024-11-20 09:29:09.817454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:14.836 [2024-11-20 09:29:09.817474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.817487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.904764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 [2024-11-20 09:29:09.904860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:14.836 [2024-11-20 09:29:09.904884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.904898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.905065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 [2024-11-20 09:29:09.905085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:14.836 [2024-11-20 09:29:09.905120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.905132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.905181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 
[2024-11-20 09:29:09.905197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:14.836 [2024-11-20 09:29:09.905210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.905221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.905366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 [2024-11-20 09:29:09.905387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:14.836 [2024-11-20 09:29:09.905400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.905418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.836 [2024-11-20 09:29:09.905485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.836 [2024-11-20 09:29:09.905505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:14.836 [2024-11-20 09:29:09.905518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.836 [2024-11-20 09:29:09.905530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.837 [2024-11-20 09:29:09.905581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.837 [2024-11-20 09:29:09.905599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:14.837 [2024-11-20 09:29:09.905611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.837 [2024-11-20 09:29:09.905630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.837 [2024-11-20 09:29:09.905724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.837 [2024-11-20 09:29:09.905745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:14.837 [2024-11-20 09:29:09.905757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.837 [2024-11-20 09:29:09.905769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.837 [2024-11-20 09:29:09.905938] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 463.988 ms, result 0 00:32:15.768 00:32:15.768 00:32:16.026 09:29:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:18.554 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:18.554 09:29:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:18.554 [2024-11-20 09:29:13.212627] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
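(For readability, here is the verify-then-read step the log above records, rewritten as a standalone sketch. The commands, paths, and arguments are copied from the log; the SPDK variable is shorthand introduced here. Note that --count=262144 blocks at the bdev's 4096-byte block size works out to the 1024 MB total that the copy progress below counts up to.)

  # Verify the data written before the dirty shutdown survived the restore.
  SPDK=/home/vagrant/spdk_repo/spdk
  md5sum -c "$SPDK/test/ftl/testfile.md5"

  # Read the second 262144-block extent (--skip=262144) of the restored ftl0
  # bdev into testfile2, recreating the bdev from the saved JSON config.
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK/test/ftl/testfile2" \
      --count=262144 --skip=262144 --json="$SPDK/test/ftl/config/ftl.json"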
00:32:18.554 [2024-11-20 09:29:13.212928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83103 ] 00:32:18.554 [2024-11-20 09:29:13.431535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.554 [2024-11-20 09:29:13.593808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.121 [2024-11-20 09:29:13.967497] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:19.121 [2024-11-20 09:29:13.967598] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:19.121 [2024-11-20 09:29:14.131421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.131499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:19.121 [2024-11-20 09:29:14.131528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:19.121 [2024-11-20 09:29:14.131540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.131606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.131624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:19.121 [2024-11-20 09:29:14.131643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:19.121 [2024-11-20 09:29:14.131671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.131704] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:19.121 [2024-11-20 09:29:14.132589] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:19.121 [2024-11-20 09:29:14.132628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.132672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:19.121 [2024-11-20 09:29:14.132687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:32:19.121 [2024-11-20 09:29:14.132698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.134682] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:19.121 [2024-11-20 09:29:14.151392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.151449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:19.121 [2024-11-20 09:29:14.151467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.712 ms 00:32:19.121 [2024-11-20 09:29:14.151481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.151581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.151606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:19.121 [2024-11-20 09:29:14.151620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:19.121 [2024-11-20 09:29:14.151632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.160323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
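(Each management step above is traced as a four-record group from mngt/ftl_mngt.c: 427 prints Action/Rollback, 428 the step name, 430 the duration, 431 the status. When skimming long runs like this one, a throwaway filter can pair names with durations. This helper is not part of the test suite; it assumes a saved copy of the console log, called ftl.log here, with one record per line.)

  # Pull out the "name:" and "duration:" records and join them pairwise.
  grep -E '428:trace_step|430:trace_step' ftl.log \
    | sed -e 's/.*name: /name: /' -e 's/.*duration: /duration: /' \
    | paste - -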
00:32:19.121 [2024-11-20 09:29:14.160375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:19.121 [2024-11-20 09:29:14.160391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.574 ms 00:32:19.121 [2024-11-20 09:29:14.160404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.160519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.160539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:19.121 [2024-11-20 09:29:14.160551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:32:19.121 [2024-11-20 09:29:14.160563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.160631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.160685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:19.121 [2024-11-20 09:29:14.160702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:19.121 [2024-11-20 09:29:14.160714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.160755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:19.121 [2024-11-20 09:29:14.165845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.165881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:19.121 [2024-11-20 09:29:14.165897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.101 ms 00:32:19.121 [2024-11-20 09:29:14.165913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.165962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.165979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:19.121 [2024-11-20 09:29:14.165991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:19.121 [2024-11-20 09:29:14.166003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.166071] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:19.121 [2024-11-20 09:29:14.166105] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:19.121 [2024-11-20 09:29:14.166148] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:19.121 [2024-11-20 09:29:14.166173] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:19.121 [2024-11-20 09:29:14.166298] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:19.121 [2024-11-20 09:29:14.166317] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:19.121 [2024-11-20 09:29:14.166332] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:19.121 [2024-11-20 09:29:14.166348] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:19.121 [2024-11-20 09:29:14.166361] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:19.121 [2024-11-20 09:29:14.166374] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:19.121 [2024-11-20 09:29:14.166386] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:19.121 [2024-11-20 09:29:14.166397] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:19.121 [2024-11-20 09:29:14.166409] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:19.121 [2024-11-20 09:29:14.166427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.166438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:19.121 [2024-11-20 09:29:14.166450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:32:19.121 [2024-11-20 09:29:14.166461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.166560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.121 [2024-11-20 09:29:14.166576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:19.121 [2024-11-20 09:29:14.166600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:19.121 [2024-11-20 09:29:14.166611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.121 [2024-11-20 09:29:14.166747] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:19.122 [2024-11-20 09:29:14.166774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:19.122 [2024-11-20 09:29:14.166787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:19.122 [2024-11-20 09:29:14.166799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.122 [2024-11-20 09:29:14.166811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:19.122 [2024-11-20 09:29:14.166821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:19.122 [2024-11-20 09:29:14.166832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:19.122 [2024-11-20 09:29:14.166842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:19.122 [2024-11-20 09:29:14.166854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:19.122 [2024-11-20 09:29:14.166864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:19.122 [2024-11-20 09:29:14.166875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:19.122 [2024-11-20 09:29:14.166886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:19.122 [2024-11-20 09:29:14.166898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:19.122 [2024-11-20 09:29:14.166909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:19.122 [2024-11-20 09:29:14.166921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:19.122 [2024-11-20 09:29:14.166942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.122 [2024-11-20 09:29:14.166953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:19.122 [2024-11-20 09:29:14.166964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:19.122 [2024-11-20 09:29:14.166975] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.122 [2024-11-20 09:29:14.166986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:19.122 [2024-11-20 09:29:14.166997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:19.122 [2024-11-20 09:29:14.167007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.122 [2024-11-20 09:29:14.167018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:19.122 [2024-11-20 09:29:14.167028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:19.122 [2024-11-20 09:29:14.167039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.122 [2024-11-20 09:29:14.167049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:19.122 [2024-11-20 09:29:14.167060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:19.122 [2024-11-20 09:29:14.167072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.122 [2024-11-20 09:29:14.167083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:19.122 [2024-11-20 09:29:14.167094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:19.122 [2024-11-20 09:29:14.167105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:19.122 [2024-11-20 09:29:14.167116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:19.122 [2024-11-20 09:29:14.167127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:19.122 [2024-11-20 09:29:14.167137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:19.122 [2024-11-20 09:29:14.167148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:19.122 [2024-11-20 09:29:14.167159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:19.122 [2024-11-20 09:29:14.167169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:19.122 [2024-11-20 09:29:14.167180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:19.122 [2024-11-20 09:29:14.167190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:19.122 [2024-11-20 09:29:14.167200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.122 [2024-11-20 09:29:14.167211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:19.122 [2024-11-20 09:29:14.167221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:19.122 [2024-11-20 09:29:14.167232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.122 [2024-11-20 09:29:14.167243] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:19.122 [2024-11-20 09:29:14.167256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:19.122 [2024-11-20 09:29:14.167268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:19.122 [2024-11-20 09:29:14.167280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:19.122 [2024-11-20 09:29:14.167292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:19.122 [2024-11-20 09:29:14.167303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:19.122 [2024-11-20 09:29:14.167314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:19.122 
[2024-11-20 09:29:14.167326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:19.122 [2024-11-20 09:29:14.167336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:19.122 [2024-11-20 09:29:14.167347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:19.122 [2024-11-20 09:29:14.167360] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:19.122 [2024-11-20 09:29:14.167374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.122 [2024-11-20 09:29:14.167386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:19.122 [2024-11-20 09:29:14.167398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:19.122 [2024-11-20 09:29:14.167409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:19.122 [2024-11-20 09:29:14.167421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:19.122 [2024-11-20 09:29:14.167432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:19.122 [2024-11-20 09:29:14.167444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:19.122 [2024-11-20 09:29:14.167455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:19.122 [2024-11-20 09:29:14.167466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:19.122 [2024-11-20 09:29:14.167477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:19.122 [2024-11-20 09:29:14.167489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:19.122 [2024-11-20 09:29:14.167500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:19.122 [2024-11-20 09:29:14.167513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:19.122 [2024-11-20 09:29:14.167525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:19.122 [2024-11-20 09:29:14.167537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:19.122 [2024-11-20 09:29:14.167548] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:19.122 [2024-11-20 09:29:14.167566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:19.122 [2024-11-20 09:29:14.167579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:19.122 [2024-11-20 09:29:14.167591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:19.122 [2024-11-20 09:29:14.167602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:19.122 [2024-11-20 09:29:14.167614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:19.122 [2024-11-20 09:29:14.167627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.122 [2024-11-20 09:29:14.167639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:19.122 [2024-11-20 09:29:14.167667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:32:19.122 [2024-11-20 09:29:14.167680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.122 [2024-11-20 09:29:14.207569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.122 [2024-11-20 09:29:14.207625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:19.122 [2024-11-20 09:29:14.207660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.821 ms 00:32:19.122 [2024-11-20 09:29:14.207676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.122 [2024-11-20 09:29:14.207804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.122 [2024-11-20 09:29:14.207821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:19.122 [2024-11-20 09:29:14.207835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:19.122 [2024-11-20 09:29:14.207846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.381 [2024-11-20 09:29:14.269617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.381 [2024-11-20 09:29:14.269692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:19.381 [2024-11-20 09:29:14.269713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.667 ms 00:32:19.381 [2024-11-20 09:29:14.269726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.381 [2024-11-20 09:29:14.269805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.381 [2024-11-20 09:29:14.269823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:19.381 [2024-11-20 09:29:14.269837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:19.381 [2024-11-20 09:29:14.269855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.381 [2024-11-20 09:29:14.270542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.381 [2024-11-20 09:29:14.270572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:19.381 [2024-11-20 09:29:14.270587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:32:19.381 [2024-11-20 09:29:14.270598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.381 [2024-11-20 09:29:14.270797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.381 [2024-11-20 09:29:14.270818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:19.381 [2024-11-20 09:29:14.270832] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:32:19.381 [2024-11-20 09:29:14.270852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.381 [2024-11-20 09:29:14.289980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.381 [2024-11-20 09:29:14.290033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:19.381 [2024-11-20 09:29:14.290056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.097 ms 00:32:19.381 [2024-11-20 09:29:14.290069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.381 [2024-11-20 09:29:14.306853] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:19.381 [2024-11-20 09:29:14.306905] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:19.381 [2024-11-20 09:29:14.306925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.381 [2024-11-20 09:29:14.306937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:19.381 [2024-11-20 09:29:14.306952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.693 ms 00:32:19.382 [2024-11-20 09:29:14.306964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.336110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.336191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:19.382 [2024-11-20 09:29:14.336211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.087 ms 00:32:19.382 [2024-11-20 09:29:14.336224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.351915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.351971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:19.382 [2024-11-20 09:29:14.351989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.630 ms 00:32:19.382 [2024-11-20 09:29:14.352001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.367092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.367148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:19.382 [2024-11-20 09:29:14.367166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.028 ms 00:32:19.382 [2024-11-20 09:29:14.367178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.368152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.368186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:19.382 [2024-11-20 09:29:14.368202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:32:19.382 [2024-11-20 09:29:14.368219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.446614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.446703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:19.382 [2024-11-20 09:29:14.446732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 78.357 ms 00:32:19.382 [2024-11-20 09:29:14.446745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.460171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:19.382 [2024-11-20 09:29:14.464480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.464521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:19.382 [2024-11-20 09:29:14.464541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.647 ms 00:32:19.382 [2024-11-20 09:29:14.464553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.464702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.464724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:19.382 [2024-11-20 09:29:14.464738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:19.382 [2024-11-20 09:29:14.464753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.465784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.465819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:19.382 [2024-11-20 09:29:14.465835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:32:19.382 [2024-11-20 09:29:14.465847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.465884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.465900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:19.382 [2024-11-20 09:29:14.465913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:19.382 [2024-11-20 09:29:14.465925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.466011] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:19.382 [2024-11-20 09:29:14.466037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.466049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:19.382 [2024-11-20 09:29:14.466062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:19.382 [2024-11-20 09:29:14.466073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.497192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.497240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:19.382 [2024-11-20 09:29:14.497267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.089 ms 00:32:19.382 [2024-11-20 09:29:14.497279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:19.382 [2024-11-20 09:29:14.497370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:19.382 [2024-11-20 09:29:14.497388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:19.382 [2024-11-20 09:29:14.497401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:32:19.382 [2024-11-20 09:29:14.497413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
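(Two of the figures above can be cross-checked by hand; this is reader's arithmetic, not log output. The 80.00 MiB l2p region in the layout dump follows directly from the reported L2P parameters, and the WAF printed in the earlier shutdown statistics is simply total writes over user writes.)

  20971520 L2P entries * 4 B/entry = 83886080 B = 80.00 MiB   (matches "Region l2p ... blocks: 80.00 MiB")
  WAF = total writes / user writes = 136384 / 134400 ~= 1.0148 (matches ftl_dev_dump_stats)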
00:32:19.382 [2024-11-20 09:29:14.498856] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 366.832 ms, result 0 00:32:20.754  [2024-11-20T09:29:16.808Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-20T09:29:17.742Z] Copying: 50/1024 [MB] (25 MBps) [2024-11-20T09:29:19.115Z] Copying: 77/1024 [MB] (26 MBps) [2024-11-20T09:29:20.048Z] Copying: 103/1024 [MB] (26 MBps) [2024-11-20T09:29:20.982Z] Copying: 130/1024 [MB] (26 MBps) [2024-11-20T09:29:21.915Z] Copying: 156/1024 [MB] (26 MBps) [2024-11-20T09:29:22.848Z] Copying: 181/1024 [MB] (25 MBps) [2024-11-20T09:29:23.814Z] Copying: 207/1024 [MB] (25 MBps) [2024-11-20T09:29:24.747Z] Copying: 232/1024 [MB] (25 MBps) [2024-11-20T09:29:26.116Z] Copying: 258/1024 [MB] (26 MBps) [2024-11-20T09:29:27.047Z] Copying: 284/1024 [MB] (25 MBps) [2024-11-20T09:29:28.051Z] Copying: 309/1024 [MB] (25 MBps) [2024-11-20T09:29:28.983Z] Copying: 335/1024 [MB] (25 MBps) [2024-11-20T09:29:29.919Z] Copying: 360/1024 [MB] (25 MBps) [2024-11-20T09:29:30.852Z] Copying: 385/1024 [MB] (25 MBps) [2024-11-20T09:29:31.829Z] Copying: 410/1024 [MB] (25 MBps) [2024-11-20T09:29:32.761Z] Copying: 435/1024 [MB] (24 MBps) [2024-11-20T09:29:34.134Z] Copying: 461/1024 [MB] (25 MBps) [2024-11-20T09:29:35.066Z] Copying: 486/1024 [MB] (25 MBps) [2024-11-20T09:29:35.999Z] Copying: 512/1024 [MB] (26 MBps) [2024-11-20T09:29:36.972Z] Copying: 536/1024 [MB] (23 MBps) [2024-11-20T09:29:37.906Z] Copying: 559/1024 [MB] (23 MBps) [2024-11-20T09:29:38.838Z] Copying: 585/1024 [MB] (25 MBps) [2024-11-20T09:29:39.775Z] Copying: 610/1024 [MB] (25 MBps) [2024-11-20T09:29:41.148Z] Copying: 636/1024 [MB] (25 MBps) [2024-11-20T09:29:41.715Z] Copying: 661/1024 [MB] (24 MBps) [2024-11-20T09:29:43.089Z] Copying: 686/1024 [MB] (25 MBps) [2024-11-20T09:29:44.023Z] Copying: 711/1024 [MB] (25 MBps) [2024-11-20T09:29:44.958Z] Copying: 737/1024 [MB] (25 MBps) [2024-11-20T09:29:45.937Z] Copying: 759/1024 [MB] (22 MBps) [2024-11-20T09:29:46.918Z] Copying: 781/1024 [MB] (22 MBps) [2024-11-20T09:29:47.854Z] Copying: 803/1024 [MB] (21 MBps) [2024-11-20T09:29:48.786Z] Copying: 825/1024 [MB] (21 MBps) [2024-11-20T09:29:49.718Z] Copying: 847/1024 [MB] (22 MBps) [2024-11-20T09:29:51.093Z] Copying: 871/1024 [MB] (24 MBps) [2024-11-20T09:29:52.027Z] Copying: 896/1024 [MB] (24 MBps) [2024-11-20T09:29:52.961Z] Copying: 920/1024 [MB] (24 MBps) [2024-11-20T09:29:53.894Z] Copying: 945/1024 [MB] (25 MBps) [2024-11-20T09:29:54.828Z] Copying: 969/1024 [MB] (24 MBps) [2024-11-20T09:29:55.772Z] Copying: 994/1024 [MB] (24 MBps) [2024-11-20T09:29:56.030Z] Copying: 1018/1024 [MB] (24 MBps) [2024-11-20T09:29:56.030Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 09:29:55.975087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.910 [2024-11-20 09:29:55.975420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:00.910 [2024-11-20 09:29:55.975569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:00.910 [2024-11-20 09:29:55.975886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.910 [2024-11-20 09:29:55.975941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:00.910 [2024-11-20 09:29:55.979727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.910 [2024-11-20 09:29:55.979766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:00.910 
[2024-11-20 09:29:55.979792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.759 ms 00:33:00.910 [2024-11-20 09:29:55.979804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.910 [2024-11-20 09:29:55.980057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.910 [2024-11-20 09:29:55.980084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:00.910 [2024-11-20 09:29:55.980098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:33:00.910 [2024-11-20 09:29:55.980110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.910 [2024-11-20 09:29:55.983586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.910 [2024-11-20 09:29:55.983619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:00.910 [2024-11-20 09:29:55.983634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.455 ms 00:33:00.910 [2024-11-20 09:29:55.983663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.910 [2024-11-20 09:29:55.990272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.910 [2024-11-20 09:29:55.990307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:00.910 [2024-11-20 09:29:55.990321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.585 ms 00:33:00.910 [2024-11-20 09:29:55.990332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:00.910 [2024-11-20 09:29:56.022021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:00.910 [2024-11-20 09:29:56.022082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:00.910 [2024-11-20 09:29:56.022116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.621 ms 00:33:00.910 [2024-11-20 09:29:56.022128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.169 [2024-11-20 09:29:56.039468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.169 [2024-11-20 09:29:56.039533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:01.169 [2024-11-20 09:29:56.039551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.294 ms 00:33:01.169 [2024-11-20 09:29:56.039564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.169 [2024-11-20 09:29:56.041372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.170 [2024-11-20 09:29:56.041415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:01.170 [2024-11-20 09:29:56.041432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:33:01.170 [2024-11-20 09:29:56.041445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.170 [2024-11-20 09:29:56.074292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.170 [2024-11-20 09:29:56.074352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:01.170 [2024-11-20 09:29:56.074372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.822 ms 00:33:01.170 [2024-11-20 09:29:56.074384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.170 [2024-11-20 09:29:56.107492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.170 [2024-11-20 09:29:56.107621] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:01.170 [2024-11-20 09:29:56.107670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.047 ms 00:33:01.170 [2024-11-20 09:29:56.107687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.170 [2024-11-20 09:29:56.140439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.170 [2024-11-20 09:29:56.140522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:01.170 [2024-11-20 09:29:56.140542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.659 ms 00:33:01.170 [2024-11-20 09:29:56.140556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.170 [2024-11-20 09:29:56.171090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.170 [2024-11-20 09:29:56.171154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:01.170 [2024-11-20 09:29:56.171171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.367 ms 00:33:01.170 [2024-11-20 09:29:56.171183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.170 [2024-11-20 09:29:56.171229] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:01.170 [2024-11-20 09:29:56.171263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:01.170 [2024-11-20 09:29:56.171284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:33:01.170 [2024-11-20 09:29:56.171298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 
261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.171992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172121] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:01.170 [2024-11-20 09:29:56.172146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 
09:29:56.172438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:01.171 [2024-11-20 09:29:56.172575] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:01.171 [2024-11-20 09:29:56.172587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88b217de-e499-40b3-9776-0cde79366cd3 00:33:01.171 [2024-11-20 09:29:56.172599] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:33:01.171 [2024-11-20 09:29:56.172611] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:33:01.171 [2024-11-20 09:29:56.172622] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:01.171 [2024-11-20 09:29:56.172634] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:01.171 [2024-11-20 09:29:56.172654] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:01.171 [2024-11-20 09:29:56.172668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:01.171 [2024-11-20 09:29:56.172694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:01.171 [2024-11-20 09:29:56.172704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:01.171 [2024-11-20 09:29:56.172715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:01.171 [2024-11-20 09:29:56.172726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.171 [2024-11-20 09:29:56.172738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:01.171 [2024-11-20 09:29:56.172751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:33:01.171 [2024-11-20 09:29:56.172767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.171 [2024-11-20 09:29:56.190454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.171 [2024-11-20 09:29:56.190507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:01.171 [2024-11-20 09:29:56.190525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.643 ms 00:33:01.171 [2024-11-20 09:29:56.190538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:01.171 [2024-11-20 09:29:56.191057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:01.171 [2024-11-20 09:29:56.191099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:01.171 [2024-11-20 09:29:56.191114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:33:01.171 [2024-11-20 09:29:56.191126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.171 [2024-11-20 09:29:56.235550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.171 [2024-11-20 09:29:56.235624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:01.171 [2024-11-20 09:29:56.235643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.171 [2024-11-20 09:29:56.235678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.171 [2024-11-20 09:29:56.235782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.171 [2024-11-20 09:29:56.235807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:01.171 [2024-11-20 09:29:56.235821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.171 [2024-11-20 09:29:56.235832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.171 [2024-11-20 09:29:56.235962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.171 [2024-11-20 09:29:56.235982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:01.171 [2024-11-20 09:29:56.235994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.171 [2024-11-20 09:29:56.236006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.171 [2024-11-20 09:29:56.236030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.171 [2024-11-20 09:29:56.236044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:01.171 [2024-11-20 09:29:56.236063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.171 [2024-11-20 09:29:56.236081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.430 [2024-11-20 09:29:56.345562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.430 [2024-11-20 09:29:56.345693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:01.430 [2024-11-20 09:29:56.345714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.430 [2024-11-20 09:29:56.345734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.430 [2024-11-20 09:29:56.428984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.430 [2024-11-20 09:29:56.429075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:01.430 [2024-11-20 09:29:56.429134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.430 [2024-11-20 09:29:56.429146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.430 [2024-11-20 09:29:56.429259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.430 [2024-11-20 09:29:56.429277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:01.430 [2024-11-20 09:29:56.429290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
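
The band dump above shows the state the dirty-shutdown test left behind: Band 1 fully written and closed, Band 2 barely opened (1536 of 261120 blocks), and Bands 3 through 100 untouched. The "WAF: inf" line follows directly from the two counters printed with it: write amplification is media writes divided by user writes, and 960 total writes against 0 user writes divides to infinity. For post-processing logs like this one, a minimal Python sketch (not part of the SPDK tree; it assumes the records have been re-split to one per line, and all names are illustrative) can pair each trace_step "name:" with the "duration:" that follows it and total the shutdown steps:

```python
import re

NAME_RE = re.compile(r"\[FTL\]\[\w+\] name: (.+?)\s*$")
DUR_RE  = re.compile(r"\[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def sum_step_durations(lines):
    """Pair each trace_step 'name:' with the next 'duration:' and total them."""
    steps, pending = [], None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1)   # remember the step name until its duration arrives
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            steps.append((pending, float(m.group(1))))
            pending = None
    return steps, sum(ms for _, ms in steps)
```

Summing only the steps visible in this excerpt accounts for part of the 454.946 ms that the finish_msg just below reports; the remainder belongs to steps that scrolled off above.
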
00:33:01.430 [2024-11-20 09:29:56.429300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.430 [2024-11-20 09:29:56.429349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.430 [2024-11-20 09:29:56.429364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:01.430 [2024-11-20 09:29:56.429376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.430 [2024-11-20 09:29:56.429409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.430 [2024-11-20 09:29:56.429539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.430 [2024-11-20 09:29:56.429562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:01.430 [2024-11-20 09:29:56.429575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.430 [2024-11-20 09:29:56.429587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.430 [2024-11-20 09:29:56.429635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.430 [2024-11-20 09:29:56.429652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:01.430 [2024-11-20 09:29:56.429665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.430 [2024-11-20 09:29:56.429676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.430 [2024-11-20 09:29:56.429770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.430 [2024-11-20 09:29:56.429790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:01.430 [2024-11-20 09:29:56.429803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.430 [2024-11-20 09:29:56.429814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.430 [2024-11-20 09:29:56.429869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:01.430 [2024-11-20 09:29:56.429887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:01.430 [2024-11-20 09:29:56.429900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:01.430 [2024-11-20 09:29:56.429918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:01.430 [2024-11-20 09:29:56.430066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 454.946 ms, result 0 00:33:02.364 00:33:02.364 00:33:02.364 09:29:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:33:04.897 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81222 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81222 ']' 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81222 00:33:04.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81222) - No such process 00:33:04.897 Process with pid 81222 is not found 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81222 is not found' 00:33:04.897 09:29:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:33:05.154 09:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:33:05.154 Remove shared memory files 00:33:05.154 09:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:05.154 09:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:05.154 09:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:05.154 09:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:33:05.154 09:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:05.154 09:30:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:05.154 00:33:05.154 real 3m55.637s 00:33:05.154 user 4m32.099s 00:33:05.154 sys 0m39.493s 00:33:05.154 09:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.154 ************************************ 00:33:05.154 END TEST ftl_dirty_shutdown 00:33:05.154 09:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:05.154 ************************************ 00:33:05.413 09:30:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:33:05.413 09:30:00 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:05.413 09:30:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:05.413 09:30:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:05.413 ************************************ 00:33:05.413 START TEST ftl_upgrade_shutdown 00:33:05.413 ************************************ 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:33:05.413 * Looking for test storage... 
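
That closes the dirty-shutdown loop: testfile2 was hashed before the unclean shutdown, and the `md5sum -c` run above proves the recovered FTL device read back bit-identical data before the test tears down its artifacts and hands off to the next suite. The record-then-verify pattern, sketched in Python for illustration (the test itself drives the GNU md5sum binary; the function and file names below are placeholders):

```python
import hashlib
from pathlib import Path

def record_md5(data_file: str, md5_file: str) -> None:
    # Write "<digest>  <path>", the line format `md5sum -c` expects.
    digest = hashlib.md5(Path(data_file).read_bytes()).hexdigest()
    Path(md5_file).write_text(f"{digest}  {data_file}\n")

def check_md5(md5_file: str) -> bool:
    # Recompute the digest after recovery and compare, like `md5sum -c`.
    digest, _, data_file = Path(md5_file).read_text().strip().partition("  ")
    return hashlib.md5(Path(data_file).read_bytes()).hexdigest() == digest
```

record_md5 runs before the forced shutdown and check_md5 after the device is brought back up; any lost or misplaced write surfaces as a digest mismatch rather than a silent pass.
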
00:33:05.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:05.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.413 --rc genhtml_branch_coverage=1 00:33:05.413 --rc genhtml_function_coverage=1 00:33:05.413 --rc genhtml_legend=1 00:33:05.413 --rc geninfo_all_blocks=1 00:33:05.413 --rc geninfo_unexecuted_blocks=1 00:33:05.413 00:33:05.413 ' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:05.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.413 --rc genhtml_branch_coverage=1 00:33:05.413 --rc genhtml_function_coverage=1 00:33:05.413 --rc genhtml_legend=1 00:33:05.413 --rc geninfo_all_blocks=1 00:33:05.413 --rc geninfo_unexecuted_blocks=1 00:33:05.413 00:33:05.413 ' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:05.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.413 --rc genhtml_branch_coverage=1 00:33:05.413 --rc genhtml_function_coverage=1 00:33:05.413 --rc genhtml_legend=1 00:33:05.413 --rc geninfo_all_blocks=1 00:33:05.413 --rc geninfo_unexecuted_blocks=1 00:33:05.413 00:33:05.413 ' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:05.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.413 --rc genhtml_branch_coverage=1 00:33:05.413 --rc genhtml_function_coverage=1 00:33:05.413 --rc genhtml_legend=1 00:33:05.413 --rc geninfo_all_blocks=1 00:33:05.413 --rc geninfo_unexecuted_blocks=1 00:33:05.413 00:33:05.413 ' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:33:05.413 09:30:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:33:05.413 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83634 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83634 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83634 ']' 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.414 09:30:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:05.686 [2024-11-20 09:30:00.618437] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
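
With spdk_tgt up (pid 83634) and waitforlisten satisfied, every step that follows in this log is a scripts/rpc.py call against /var/tmp/spdk.sock: bdev_nvme_attach_controller for the base and cache devices, bdev_get_bdevs to size them, the lvol and split calls, and finally bdev_ftl_create. A hedged sketch of that CLI-driving pattern; the rpc.py path and the `-t 60` timeout are taken from the trace, while the wrapper itself and the rpc_get_methods probe are illustrative, not ftl/common.sh code:

```python
import json, subprocess, time

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path as traced in this run

def rpc(*args, timeout=60):
    # Same shape as the traced calls, e.g. `rpc.py -t 60 bdev_ftl_create ...`.
    out = subprocess.run([RPC, "-t", str(timeout), *map(str, args)],
                         check=True, capture_output=True, text=True).stdout
    # bdev/lvstore queries return JSON; other calls may print plain text.
    return json.loads(out) if out.lstrip().startswith(("[", "{")) else out

def wait_for_listen(deadline_s=30):
    # Rough analogue of waitforlisten: poll until the RPC socket answers.
    end = time.time() + deadline_s
    while time.time() < end:
        try:
            rpc("rpc_get_methods", timeout=5)
            return True
        except subprocess.CalledProcessError:
            time.sleep(0.5)
    return False
```

Used that way, rpc("bdev_get_bdevs", "-b", "basen1") returns the same JSON document that the get_bdev_size helper pipes through jq a few lines below.
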
00:33:05.686 [2024-11-20 09:30:00.618630] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83634 ] 00:33:05.943 [2024-11-20 09:30:00.810473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.943 [2024-11-20 09:30:00.970975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:33:06.874 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:06.875 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:33:06.875 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:33:06.875 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:33:06.875 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:33:06.875 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:33:06.875 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:06.875 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:33:06.875 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:33:06.875 09:30:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:33:07.131 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:33:07.131 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:33:07.131 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:33:07.131 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:33:07.131 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:07.131 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:07.131 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:33:07.131 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:33:07.388 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:07.388 { 00:33:07.388 "name": "basen1", 00:33:07.388 "aliases": [ 00:33:07.388 "4cb50528-f3e2-4047-960a-2f5ba67928db" 00:33:07.388 ], 00:33:07.388 "product_name": "NVMe disk", 00:33:07.388 "block_size": 4096, 00:33:07.388 "num_blocks": 1310720, 00:33:07.388 "uuid": "4cb50528-f3e2-4047-960a-2f5ba67928db", 00:33:07.388 "numa_id": -1, 00:33:07.388 "assigned_rate_limits": { 00:33:07.388 "rw_ios_per_sec": 0, 00:33:07.388 "rw_mbytes_per_sec": 0, 00:33:07.388 "r_mbytes_per_sec": 0, 00:33:07.388 "w_mbytes_per_sec": 0 00:33:07.388 }, 00:33:07.388 "claimed": true, 00:33:07.388 "claim_type": "read_many_write_one", 00:33:07.388 "zoned": false, 00:33:07.388 "supported_io_types": { 00:33:07.388 "read": true, 00:33:07.388 "write": true, 00:33:07.388 "unmap": true, 00:33:07.388 "flush": true, 00:33:07.388 "reset": true, 00:33:07.388 "nvme_admin": true, 00:33:07.388 "nvme_io": true, 00:33:07.388 "nvme_io_md": false, 00:33:07.388 "write_zeroes": true, 00:33:07.388 "zcopy": false, 00:33:07.388 "get_zone_info": false, 00:33:07.388 "zone_management": false, 00:33:07.388 "zone_append": false, 00:33:07.388 "compare": true, 00:33:07.388 "compare_and_write": false, 00:33:07.388 "abort": true, 00:33:07.388 "seek_hole": false, 00:33:07.388 "seek_data": false, 00:33:07.388 "copy": true, 00:33:07.388 "nvme_iov_md": false 00:33:07.388 }, 00:33:07.388 "driver_specific": { 00:33:07.388 "nvme": [ 00:33:07.388 { 00:33:07.388 "pci_address": "0000:00:11.0", 00:33:07.388 "trid": { 00:33:07.388 "trtype": "PCIe", 00:33:07.388 "traddr": "0000:00:11.0" 00:33:07.388 }, 00:33:07.388 "ctrlr_data": { 00:33:07.388 "cntlid": 0, 00:33:07.388 "vendor_id": "0x1b36", 00:33:07.388 "model_number": "QEMU NVMe Ctrl", 00:33:07.388 "serial_number": "12341", 00:33:07.388 "firmware_revision": "8.0.0", 00:33:07.388 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:07.388 "oacs": { 00:33:07.388 "security": 0, 00:33:07.388 "format": 1, 00:33:07.388 "firmware": 0, 00:33:07.388 "ns_manage": 1 00:33:07.388 }, 00:33:07.388 "multi_ctrlr": false, 00:33:07.388 "ana_reporting": false 00:33:07.388 }, 00:33:07.388 "vs": { 00:33:07.388 "nvme_version": "1.4" 00:33:07.389 }, 00:33:07.389 "ns_data": { 00:33:07.389 "id": 1, 00:33:07.389 "can_share": false 00:33:07.389 } 00:33:07.389 } 00:33:07.389 ], 00:33:07.389 "mp_policy": "active_passive" 00:33:07.389 } 00:33:07.389 } 00:33:07.389 ]' 00:33:07.389 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:07.648 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:07.907 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=8416b239-9e3d-4710-acc7-263f107fe342 00:33:07.907 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:33:07.907 09:30:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8416b239-9e3d-4710-acc7-263f107fe342 00:33:08.165 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:33:08.423 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=8e3a5fa2-79e4-4af2-8701-f1dcc6f2bbad 00:33:08.423 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 8e3a5fa2-79e4-4af2-8701-f1dcc6f2bbad 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0 ]] 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0 5120 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:33:08.681 09:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0 00:33:08.940 09:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:08.940 { 00:33:08.940 "name": "5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0", 00:33:08.940 "aliases": [ 00:33:08.940 "lvs/basen1p0" 00:33:08.940 ], 00:33:08.940 "product_name": "Logical Volume", 00:33:08.940 "block_size": 4096, 00:33:08.940 "num_blocks": 5242880, 00:33:08.940 "uuid": "5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0", 00:33:08.940 "assigned_rate_limits": { 00:33:08.940 "rw_ios_per_sec": 0, 00:33:08.940 "rw_mbytes_per_sec": 0, 00:33:08.940 "r_mbytes_per_sec": 0, 00:33:08.940 "w_mbytes_per_sec": 0 00:33:08.940 }, 00:33:08.940 "claimed": false, 00:33:08.940 "zoned": false, 00:33:08.940 "supported_io_types": { 00:33:08.940 "read": true, 00:33:08.940 "write": true, 00:33:08.940 "unmap": true, 00:33:08.940 "flush": false, 00:33:08.940 "reset": true, 00:33:08.940 "nvme_admin": false, 00:33:08.940 "nvme_io": false, 00:33:08.940 "nvme_io_md": false, 00:33:08.940 "write_zeroes": 
true, 00:33:08.940 "zcopy": false, 00:33:08.940 "get_zone_info": false, 00:33:08.940 "zone_management": false, 00:33:08.940 "zone_append": false, 00:33:08.940 "compare": false, 00:33:08.940 "compare_and_write": false, 00:33:08.940 "abort": false, 00:33:08.940 "seek_hole": true, 00:33:08.940 "seek_data": true, 00:33:08.940 "copy": false, 00:33:08.940 "nvme_iov_md": false 00:33:08.940 }, 00:33:08.940 "driver_specific": { 00:33:08.940 "lvol": { 00:33:08.940 "lvol_store_uuid": "8e3a5fa2-79e4-4af2-8701-f1dcc6f2bbad", 00:33:08.940 "base_bdev": "basen1", 00:33:08.940 "thin_provision": true, 00:33:08.940 "num_allocated_clusters": 0, 00:33:08.940 "snapshot": false, 00:33:08.940 "clone": false, 00:33:08.940 "esnap_clone": false 00:33:08.940 } 00:33:08.940 } 00:33:08.940 } 00:33:08.940 ]' 00:33:08.940 09:30:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:08.940 09:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:08.940 09:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:09.198 09:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:33:09.198 09:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:33:09.198 09:30:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:33:09.198 09:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:33:09.198 09:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:33:09.198 09:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:33:09.457 09:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:33:09.457 09:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:33:09.457 09:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:33:09.746 09:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:33:09.746 09:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:33:09.746 09:30:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5f5e024f-f7ed-4f9d-8551-e89ef7f9b4d0 -c cachen1p0 --l2p_dram_limit 2 00:33:10.012 [2024-11-20 09:30:05.106413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.106635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:10.012 [2024-11-20 09:30:05.106691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:10.012 [2024-11-20 09:30:05.106715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.106810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.106829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:10.012 [2024-11-20 09:30:05.106845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:33:10.012 [2024-11-20 09:30:05.106858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.106894] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:10.012 [2024-11-20 
09:30:05.107938] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:10.012 [2024-11-20 09:30:05.107972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.107985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:10.012 [2024-11-20 09:30:05.108004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:33:10.012 [2024-11-20 09:30:05.108016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.108161] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 7eda8db2-5b88-4c93-905a-930b03859a93 00:33:10.012 [2024-11-20 09:30:05.110034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.110079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:33:10.012 [2024-11-20 09:30:05.110097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:33:10.012 [2024-11-20 09:30:05.110112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.119939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.119997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:10.012 [2024-11-20 09:30:05.120018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.744 ms 00:33:10.012 [2024-11-20 09:30:05.120034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.120105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.120128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:10.012 [2024-11-20 09:30:05.120142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:33:10.012 [2024-11-20 09:30:05.120160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.120246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.120269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:10.012 [2024-11-20 09:30:05.120283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:10.012 [2024-11-20 09:30:05.120305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.120341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:10.012 [2024-11-20 09:30:05.125709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.125751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:10.012 [2024-11-20 09:30:05.125773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.374 ms 00:33:10.012 [2024-11-20 09:30:05.125786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.125831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.125847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:10.012 [2024-11-20 09:30:05.125863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:10.012 [2024-11-20 09:30:05.125875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.125929] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:33:10.012 [2024-11-20 09:30:05.126102] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:10.012 [2024-11-20 09:30:05.126127] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:10.012 [2024-11-20 09:30:05.126144] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:10.012 [2024-11-20 09:30:05.126162] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:10.012 [2024-11-20 09:30:05.126176] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:10.012 [2024-11-20 09:30:05.126192] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:10.012 [2024-11-20 09:30:05.126204] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:10.012 [2024-11-20 09:30:05.126221] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:10.012 [2024-11-20 09:30:05.126233] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:10.012 [2024-11-20 09:30:05.126265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.126277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:10.012 [2024-11-20 09:30:05.126293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:33:10.012 [2024-11-20 09:30:05.126305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.012 [2024-11-20 09:30:05.126405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.012 [2024-11-20 09:30:05.126421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:10.013 [2024-11-20 09:30:05.126438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:33:10.013 [2024-11-20 09:30:05.126463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.013 [2024-11-20 09:30:05.126593] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:10.013 [2024-11-20 09:30:05.126612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:10.013 [2024-11-20 09:30:05.126630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:10.013 [2024-11-20 09:30:05.126643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.126684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:10.013 [2024-11-20 09:30:05.126696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.126711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:10.013 [2024-11-20 09:30:05.126722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:10.013 [2024-11-20 09:30:05.126736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:10.013 [2024-11-20 09:30:05.126747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.126760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:10.013 [2024-11-20 09:30:05.126772] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:33:10.013 [2024-11-20 09:30:05.126786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.126797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:10.013 [2024-11-20 09:30:05.126810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:10.013 [2024-11-20 09:30:05.126821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.126837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:10.013 [2024-11-20 09:30:05.126849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:10.013 [2024-11-20 09:30:05.126864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.126877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:10.013 [2024-11-20 09:30:05.126892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:10.013 [2024-11-20 09:30:05.126903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:10.013 [2024-11-20 09:30:05.126917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:10.013 [2024-11-20 09:30:05.126929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:10.013 [2024-11-20 09:30:05.126943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:10.013 [2024-11-20 09:30:05.126954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:10.013 [2024-11-20 09:30:05.126968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:10.013 [2024-11-20 09:30:05.126980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:10.013 [2024-11-20 09:30:05.126994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:10.013 [2024-11-20 09:30:05.127005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:10.013 [2024-11-20 09:30:05.127018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:10.013 [2024-11-20 09:30:05.127030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:10.013 [2024-11-20 09:30:05.127046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:10.013 [2024-11-20 09:30:05.127058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.127072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:10.013 [2024-11-20 09:30:05.127083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:10.013 [2024-11-20 09:30:05.127097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.127108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:10.013 [2024-11-20 09:30:05.127122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:10.013 [2024-11-20 09:30:05.127133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.127147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:10.013 [2024-11-20 09:30:05.127159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:10.013 [2024-11-20 09:30:05.127173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.127184] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:33:10.013 [2024-11-20 09:30:05.127200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:10.013 [2024-11-20 09:30:05.127213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:10.013 [2024-11-20 09:30:05.127230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:10.013 [2024-11-20 09:30:05.127243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:10.013 [2024-11-20 09:30:05.127260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:10.013 [2024-11-20 09:30:05.127272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:10.013 [2024-11-20 09:30:05.127286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:10.013 [2024-11-20 09:30:05.127298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:10.013 [2024-11-20 09:30:05.127313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:10.013 [2024-11-20 09:30:05.127330] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:10.013 [2024-11-20 09:30:05.127348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:10.013 [2024-11-20 09:30:05.127380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:10.013 [2024-11-20 09:30:05.127420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:10.013 [2024-11-20 09:30:05.127435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:10.013 [2024-11-20 09:30:05.127448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:10.013 [2024-11-20 09:30:05.127463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:10.013 [2024-11-20 09:30:05.127563] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:10.013 [2024-11-20 09:30:05.127580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:10.013 [2024-11-20 09:30:05.127608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:10.013 [2024-11-20 09:30:05.127620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:10.013 [2024-11-20 09:30:05.127636] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:10.013 [2024-11-20 09:30:05.127665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:10.013 [2024-11-20 09:30:05.127684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:10.013 [2024-11-20 09:30:05.127698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.145 ms 00:33:10.013 [2024-11-20 09:30:05.127713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:10.013 [2024-11-20 09:30:05.127790] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
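The superblock dump above lists each region in raw FTL blocks (blk_offs/blk_sz, in hex) while the layout dump lists the same regions in MiB. Assuming a 4 KiB FTL block size (an inference from the numbers, not stated explicitly in the log), the two views agree; for example, the base-dev region at blk_offs:0x480040 blk_sz:0xe0 lines up with the vmap entry:

  # cross-check in plain shell arithmetic (assumes 4096-byte FTL blocks)
  echo "offset: $(( 0x480040 * 4096 )) B, size: $(( 0xe0 * 4096 )) B"
  # 19327614976 B = 18432.25 MiB and 917504 B = 0.88 MiB, matching the
  # "Region vmap" offset/blocks figures in the layout dump above.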
00:33:10.013 [2024-11-20 09:30:05.127815] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:13.297 [2024-11-20 09:30:08.101259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.101343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:13.297 [2024-11-20 09:30:08.101366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2973.486 ms 00:33:13.297 [2024-11-20 09:30:08.101383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.139698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.139771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:13.297 [2024-11-20 09:30:08.139794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.008 ms 00:33:13.297 [2024-11-20 09:30:08.139810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.139945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.139970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:13.297 [2024-11-20 09:30:08.139985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:33:13.297 [2024-11-20 09:30:08.140003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.185356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.185426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:13.297 [2024-11-20 09:30:08.185447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.291 ms 00:33:13.297 [2024-11-20 09:30:08.185463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.185528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.185554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:13.297 [2024-11-20 09:30:08.185569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:13.297 [2024-11-20 09:30:08.185583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.186282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.186463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:13.297 [2024-11-20 09:30:08.186489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:33:13.297 [2024-11-20 09:30:08.186505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.186577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.186597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:13.297 [2024-11-20 09:30:08.186613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:33:13.297 [2024-11-20 09:30:08.186630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.207910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.207971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:13.297 [2024-11-20 09:30:08.207990] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.233 ms 00:33:13.297 [2024-11-20 09:30:08.208006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.224054] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:13.297 [2024-11-20 09:30:08.225591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.225763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:13.297 [2024-11-20 09:30:08.225801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.458 ms 00:33:13.297 [2024-11-20 09:30:08.225815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.270643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.270747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:33:13.297 [2024-11-20 09:30:08.270774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.754 ms 00:33:13.297 [2024-11-20 09:30:08.270787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.270941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.270966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:13.297 [2024-11-20 09:30:08.270988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:33:13.297 [2024-11-20 09:30:08.271001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.302528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.302598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:33:13.297 [2024-11-20 09:30:08.302622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.421 ms 00:33:13.297 [2024-11-20 09:30:08.302636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.333796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.333842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:33:13.297 [2024-11-20 09:30:08.333863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.081 ms 00:33:13.297 [2024-11-20 09:30:08.333876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.297 [2024-11-20 09:30:08.334782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.297 [2024-11-20 09:30:08.334816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:13.297 [2024-11-20 09:30:08.334836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.844 ms 00:33:13.297 [2024-11-20 09:30:08.334849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.556 [2024-11-20 09:30:08.449023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.556 [2024-11-20 09:30:08.449388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:33:13.556 [2024-11-20 09:30:08.449433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 114.081 ms 00:33:13.556 [2024-11-20 09:30:08.449449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.556 [2024-11-20 09:30:08.484328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:33:13.556 [2024-11-20 09:30:08.484814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:33:13.556 [2024-11-20 09:30:08.484878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.677 ms 00:33:13.556 [2024-11-20 09:30:08.484894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.556 [2024-11-20 09:30:08.519226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.556 [2024-11-20 09:30:08.519312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:33:13.556 [2024-11-20 09:30:08.519338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.186 ms 00:33:13.556 [2024-11-20 09:30:08.519352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.556 [2024-11-20 09:30:08.550054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.556 [2024-11-20 09:30:08.550098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:13.556 [2024-11-20 09:30:08.550119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.640 ms 00:33:13.556 [2024-11-20 09:30:08.550132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.556 [2024-11-20 09:30:08.550193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.556 [2024-11-20 09:30:08.550211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:13.556 [2024-11-20 09:30:08.550231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:13.556 [2024-11-20 09:30:08.550254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.556 [2024-11-20 09:30:08.550388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.556 [2024-11-20 09:30:08.550408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:13.556 [2024-11-20 09:30:08.550428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:33:13.556 [2024-11-20 09:30:08.550440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.556 [2024-11-20 09:30:08.551756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3444.809 ms, result 0 00:33:13.556 { 00:33:13.556 "name": "ftl", 00:33:13.556 "uuid": "7eda8db2-5b88-4c93-905a-930b03859a93" 00:33:13.556 } 00:33:13.556 09:30:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:33:13.814 [2024-11-20 09:30:08.823135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.814 09:30:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:33:14.072 09:30:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:33:14.333 [2024-11-20 09:30:09.423671] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:14.333 09:30:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:33:14.902 [2024-11-20 09:30:09.738703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:14.902 09:30:09 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:33:15.161 Fill FTL, iteration 1 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83762 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83762 /var/tmp/spdk.tgt.sock 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83762 ']' 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:33:15.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.161 09:30:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:15.161 [2024-11-20 09:30:10.245304] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
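The upgrade_shutdown.sh@28..@40 xtrace above is the head of a fill-and-checksum loop. A simplified sketch of its shape (not the verbatim script; tcp_dd is the ftl/common.sh helper that drives spdk_dd through the TCP initiator, and file stands for the test/ftl/file path used below):

  sums=()
  seek=0 skip=0
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $(( i + 1 ))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$(( seek + count ))
      echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
      tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$(( skip + count ))
      sums[i]=$(md5sum "$file" | cut -f1 -d' ')   # digest kept for later comparison
  done

With bs=1048576 and count=1024, each pass moves exactly the 1 GiB declared as size= at @28.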
00:33:15.161 [2024-11-20 09:30:10.245803] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83762 ] 00:33:15.420 [2024-11-20 09:30:10.431720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.678 [2024-11-20 09:30:10.585451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.612 09:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:16.612 09:30:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:16.612 09:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:33:16.870 ftln1 00:33:16.870 09:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:33:16.870 09:30:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83762 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83762 ']' 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83762 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83762 00:33:17.130 killing process with pid 83762 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83762' 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83762 00:33:17.130 09:30:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83762 00:33:19.659 09:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:33:19.659 09:30:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:33:19.659 [2024-11-20 09:30:14.508228] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
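tcp_initiator_setup, traced above from ftl/common.sh@151..@176, spins up a second spdk_tgt on core 1 with its own RPC socket, attaches the namespace exported by the main target (which surfaces as bdev "ftln1"), snapshots the bdev subsystem config for spdk_dd, and tears the helper down. A sketch using the paths from this run:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock     # autotest_common.sh helper
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0                        # creates bdev "ftln1"
  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  kill "$spdk_ini_pid"                                     # the harness uses killprocess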
00:33:19.659 [2024-11-20 09:30:14.508402] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83815 ] 00:33:19.659 [2024-11-20 09:30:14.692367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.918 [2024-11-20 09:30:14.856338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.290  [2024-11-20T09:30:17.343Z] Copying: 219/1024 [MB] (219 MBps) [2024-11-20T09:30:18.726Z] Copying: 439/1024 [MB] (220 MBps) [2024-11-20T09:30:19.660Z] Copying: 660/1024 [MB] (221 MBps) [2024-11-20T09:30:20.225Z] Copying: 876/1024 [MB] (216 MBps) [2024-11-20T09:30:21.599Z] Copying: 1024/1024 [MB] (average 217 MBps) 00:33:26.479 00:33:26.479 Calculate MD5 checksum, iteration 1 00:33:26.479 09:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:33:26.479 09:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:33:26.479 09:30:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:26.479 09:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:26.479 09:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:26.479 09:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:26.479 09:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:26.479 09:30:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:26.479 [2024-11-20 09:30:21.339400] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
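The readback feeding the checksum is a plain spdk_dd run against that initiator-side config; this is the exact @199 command from the trace above, re-wrapped for readability:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0

Here --ib names the source bdev from ini.json and --of a regular output file, the mirror image of the --if/--ob fill direction.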
00:33:26.479 [2024-11-20 09:30:21.339840] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83886 ] 00:33:26.479 [2024-11-20 09:30:21.524163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.738 [2024-11-20 09:30:21.682788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.109  [2024-11-20T09:30:24.603Z] Copying: 472/1024 [MB] (472 MBps) [2024-11-20T09:30:24.603Z] Copying: 933/1024 [MB] (461 MBps) [2024-11-20T09:30:25.537Z] Copying: 1024/1024 [MB] (average 464 MBps) 00:33:30.417 00:33:30.417 09:30:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:33:30.417 09:30:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:32.941 Fill FTL, iteration 2 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7080b9b78dd472d2ecca2dab0bd85540 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:32.941 09:30:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:33:32.941 [2024-11-20 09:30:27.699772] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
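Iteration 2 bumps --seek to 1024, so the second pass writes the next 1 GiB of ftln1 rather than overwriting the first. The offsets are counted in bs-sized blocks:

  echo $(( 1024 * 1048576 ))   # 1073741824: seek of 1024 x 1 MiB blocks = 1 GiB in,
                               # the same value as size= at upgrade_shutdown.sh@28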
00:33:32.941 [2024-11-20 09:30:27.701007] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83953 ] 00:33:32.941 [2024-11-20 09:30:27.900028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.941 [2024-11-20 09:30:28.033590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.848  [2024-11-20T09:30:30.534Z] Copying: 204/1024 [MB] (204 MBps) [2024-11-20T09:30:31.957Z] Copying: 401/1024 [MB] (197 MBps) [2024-11-20T09:30:32.524Z] Copying: 611/1024 [MB] (210 MBps) [2024-11-20T09:30:33.463Z] Copying: 822/1024 [MB] (211 MBps) [2024-11-20T09:30:34.835Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:33:39.715 00:33:39.715 09:30:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:33:39.715 Calculate MD5 checksum, iteration 2 00:33:39.715 09:30:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:33:39.715 09:30:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:39.715 09:30:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:39.715 09:30:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:39.715 09:30:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:39.715 09:30:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:39.715 09:30:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:39.715 [2024-11-20 09:30:34.626617] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
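Each readback file is hashed and the digest stashed in the sums array; the capture at upgrade_shutdown.sh@47..@48 amounts to this one-liner (sketch):

  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')

Iteration 1 recorded 7080b9b78dd472d2ecca2dab0bd85540 above; iteration 2 lands on 46a1202557923cf62d7509ffd04236c0 just below.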
00:33:39.715 [2024-11-20 09:30:34.626810] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84023 ] 00:33:39.715 [2024-11-20 09:30:34.811893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.973 [2024-11-20 09:30:34.948664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.874  [2024-11-20T09:30:37.941Z] Copying: 497/1024 [MB] (497 MBps) [2024-11-20T09:30:37.941Z] Copying: 958/1024 [MB] (461 MBps) [2024-11-20T09:30:39.323Z] Copying: 1024/1024 [MB] (average 475 MBps) 00:33:44.203 00:33:44.204 09:30:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:33:44.204 09:30:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:46.741 09:30:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:33:46.741 09:30:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=46a1202557923cf62d7509ffd04236c0 00:33:46.741 09:30:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:33:46.741 09:30:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:33:46.741 09:30:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:46.741 [2024-11-20 09:30:41.780478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.741 [2024-11-20 09:30:41.780557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:46.741 [2024-11-20 09:30:41.780580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:33:46.741 [2024-11-20 09:30:41.780594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.741 [2024-11-20 09:30:41.780641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.741 [2024-11-20 09:30:41.780692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:46.741 [2024-11-20 09:30:41.780714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:46.741 [2024-11-20 09:30:41.780735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.741 [2024-11-20 09:30:41.780774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:46.741 [2024-11-20 09:30:41.780796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:46.741 [2024-11-20 09:30:41.780810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:46.741 [2024-11-20 09:30:41.780822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:46.741 [2024-11-20 09:30:41.780907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.427 ms, result 0 00:33:46.741 true 00:33:46.741 09:30:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:46.999 { 00:33:46.999 "name": "ftl", 00:33:46.999 "properties": [ 00:33:46.999 { 00:33:46.999 "name": "superblock_version", 00:33:47.000 "value": 5, 00:33:47.000 "read-only": true 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "name": "base_device", 00:33:47.000 "bands": [ 00:33:47.000 { 00:33:47.000 "id": 
0, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 1, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 2, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 3, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 4, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 5, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 6, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 7, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 8, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 9, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 10, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 11, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 12, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 13, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 14, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 15, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 16, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 17, 00:33:47.000 "state": "FREE", 00:33:47.000 "validity": 0.0 00:33:47.000 } 00:33:47.000 ], 00:33:47.000 "read-only": true 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "name": "cache_device", 00:33:47.000 "type": "bdev", 00:33:47.000 "chunks": [ 00:33:47.000 { 00:33:47.000 "id": 0, 00:33:47.000 "state": "INACTIVE", 00:33:47.000 "utilization": 0.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 1, 00:33:47.000 "state": "CLOSED", 00:33:47.000 "utilization": 1.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 2, 00:33:47.000 "state": "CLOSED", 00:33:47.000 "utilization": 1.0 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 3, 00:33:47.000 "state": "OPEN", 00:33:47.000 "utilization": 0.001953125 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "id": 4, 00:33:47.000 "state": "OPEN", 00:33:47.000 "utilization": 0.0 00:33:47.000 } 00:33:47.000 ], 00:33:47.000 "read-only": true 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "name": "verbose_mode", 00:33:47.000 "value": true, 00:33:47.000 "unit": "", 00:33:47.000 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:47.000 }, 00:33:47.000 { 00:33:47.000 "name": "prep_upgrade_on_shutdown", 00:33:47.000 "value": false, 00:33:47.000 "unit": "", 00:33:47.000 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:47.000 } 00:33:47.000 ] 00:33:47.000 } 00:33:47.259 09:30:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:33:47.259 [2024-11-20 09:30:42.364634] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:47.259 [2024-11-20 09:30:42.364762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:47.259 [2024-11-20 09:30:42.364783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:47.259 [2024-11-20 09:30:42.364796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:47.259 [2024-11-20 09:30:42.364837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:47.259 [2024-11-20 09:30:42.364855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:47.259 [2024-11-20 09:30:42.364871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:47.259 [2024-11-20 09:30:42.364883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:47.259 [2024-11-20 09:30:42.364911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:47.259 [2024-11-20 09:30:42.364933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:47.259 [2024-11-20 09:30:42.364946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:47.259 [2024-11-20 09:30:42.364957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:47.259 [2024-11-20 09:30:42.365038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.390 ms, result 0 00:33:47.259 true 00:33:47.517 09:30:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:33:47.517 09:30:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:47.517 09:30:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:47.775 09:30:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:33:47.775 09:30:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:33:47.775 09:30:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:48.033 [2024-11-20 09:30:43.073471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.033 [2024-11-20 09:30:43.073551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:48.033 [2024-11-20 09:30:43.073573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:33:48.033 [2024-11-20 09:30:43.073587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.033 [2024-11-20 09:30:43.073622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.033 [2024-11-20 09:30:43.073639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:48.033 [2024-11-20 09:30:43.073674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:48.033 [2024-11-20 09:30:43.073688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.033 [2024-11-20 09:30:43.073718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:48.033 [2024-11-20 09:30:43.073734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:48.033 [2024-11-20 09:30:43.073746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:48.033 [2024-11-20 
09:30:43.073759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:48.033 [2024-11-20 09:30:43.073840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.351 ms, result 0 00:33:48.033 true 00:33:48.033 09:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:48.291 { 00:33:48.291 "name": "ftl", 00:33:48.291 "properties": [ 00:33:48.291 { 00:33:48.291 "name": "superblock_version", 00:33:48.291 "value": 5, 00:33:48.291 "read-only": true 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "name": "base_device", 00:33:48.291 "bands": [ 00:33:48.291 { 00:33:48.291 "id": 0, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 1, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 2, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 3, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 4, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 5, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 6, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 7, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 8, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 9, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 10, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 11, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 12, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 13, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 14, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 15, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 16, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 17, 00:33:48.291 "state": "FREE", 00:33:48.291 "validity": 0.0 00:33:48.291 } 00:33:48.291 ], 00:33:48.291 "read-only": true 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "name": "cache_device", 00:33:48.291 "type": "bdev", 00:33:48.291 "chunks": [ 00:33:48.291 { 00:33:48.291 "id": 0, 00:33:48.291 "state": "INACTIVE", 00:33:48.291 "utilization": 0.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 1, 00:33:48.291 "state": "CLOSED", 00:33:48.291 "utilization": 1.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 2, 00:33:48.291 "state": "CLOSED", 00:33:48.291 "utilization": 1.0 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 3, 00:33:48.291 "state": "OPEN", 00:33:48.291 "utilization": 0.001953125 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "id": 4, 00:33:48.291 "state": "OPEN", 00:33:48.291 "utilization": 0.0 00:33:48.291 } 00:33:48.291 ], 00:33:48.291 "read-only": true 00:33:48.291 
}, 00:33:48.291 { 00:33:48.291 "name": "verbose_mode", 00:33:48.291 "value": true, 00:33:48.291 "unit": "", 00:33:48.291 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:48.291 }, 00:33:48.291 { 00:33:48.291 "name": "prep_upgrade_on_shutdown", 00:33:48.291 "value": true, 00:33:48.291 "unit": "", 00:33:48.291 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:48.291 } 00:33:48.291 ] 00:33:48.291 } 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83634 ]] 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83634 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83634 ']' 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83634 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83634 00:33:48.291 killing process with pid 83634 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83634' 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83634 00:33:48.291 09:30:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83634 00:33:49.666 [2024-11-20 09:30:44.420683] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:49.666 [2024-11-20 09:30:44.439224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.666 [2024-11-20 09:30:44.439282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:49.666 [2024-11-20 09:30:44.439329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:49.666 [2024-11-20 09:30:44.439342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:49.666 [2024-11-20 09:30:44.439375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:49.666 [2024-11-20 09:30:44.443133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:49.666 [2024-11-20 09:30:44.443169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:49.666 [2024-11-20 09:30:44.443202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.734 ms 00:33:49.666 [2024-11-20 09:30:44.443214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 09:30:53.128834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.128924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:59.633 [2024-11-20 09:30:53.128955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8685.633 ms 00:33:59.633 [2024-11-20 09:30:53.128976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 
09:30:53.130382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.130412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:59.633 [2024-11-20 09:30:53.130427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.376 ms 00:33:59.633 [2024-11-20 09:30:53.130440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 09:30:53.131710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.131757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:59.633 [2024-11-20 09:30:53.131774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.214 ms 00:33:59.633 [2024-11-20 09:30:53.131787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 09:30:53.144969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.145214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:59.633 [2024-11-20 09:30:53.145244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.073 ms 00:33:59.633 [2024-11-20 09:30:53.145259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 09:30:53.153595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.153643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:59.633 [2024-11-20 09:30:53.153677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.269 ms 00:33:59.633 [2024-11-20 09:30:53.153700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 09:30:53.153835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.153857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:59.633 [2024-11-20 09:30:53.153883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.088 ms 00:33:59.633 [2024-11-20 09:30:53.153896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 09:30:53.166225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.166280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:59.633 [2024-11-20 09:30:53.166298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.304 ms 00:33:59.633 [2024-11-20 09:30:53.166310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 09:30:53.178494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.178541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:59.633 [2024-11-20 09:30:53.178557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.139 ms 00:33:59.633 [2024-11-20 09:30:53.178569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 09:30:53.190711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.190761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:59.633 [2024-11-20 09:30:53.190788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.096 ms 00:33:59.633 [2024-11-20 09:30:53.190800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:33:59.633 [2024-11-20 09:30:53.202773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.633 [2024-11-20 09:30:53.202817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:59.633 [2024-11-20 09:30:53.202833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.857 ms 00:33:59.633 [2024-11-20 09:30:53.202844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.633 [2024-11-20 09:30:53.202887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:59.633 [2024-11-20 09:30:53.202913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:59.633 [2024-11-20 09:30:53.202942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:59.633 [2024-11-20 09:30:53.202976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:59.633 [2024-11-20 09:30:53.202990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:59.633 [2024-11-20 09:30:53.203184] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:59.633 [2024-11-20 09:30:53.203196] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 7eda8db2-5b88-4c93-905a-930b03859a93 00:33:59.634 [2024-11-20 09:30:53.203209] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:59.634 [2024-11-20 
09:30:53.203222] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:33:59.634 [2024-11-20 09:30:53.203240] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:59.634 [2024-11-20 09:30:53.203253] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:59.634 [2024-11-20 09:30:53.203271] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:59.634 [2024-11-20 09:30:53.203299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:59.634 [2024-11-20 09:30:53.203310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:59.634 [2024-11-20 09:30:53.203321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:59.634 [2024-11-20 09:30:53.203334] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:59.634 [2024-11-20 09:30:53.203346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.634 [2024-11-20 09:30:53.203364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:59.634 [2024-11-20 09:30:53.203378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.462 ms 00:33:59.634 [2024-11-20 09:30:53.203390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.220684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.634 [2024-11-20 09:30:53.220738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:59.634 [2024-11-20 09:30:53.220756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.229 ms 00:33:59.634 [2024-11-20 09:30:53.220779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.221287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:59.634 [2024-11-20 09:30:53.221324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:59.634 [2024-11-20 09:30:53.221340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.455 ms 00:33:59.634 [2024-11-20 09:30:53.221352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.278450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.278537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:59.634 [2024-11-20 09:30:53.278567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.278589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.278687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.278710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:59.634 [2024-11-20 09:30:53.278725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.278737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.278884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.278906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:59.634 [2024-11-20 09:30:53.278929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.278941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:33:59.634 [2024-11-20 09:30:53.278978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.278993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:59.634 [2024-11-20 09:30:53.279006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.279025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.390427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.390744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:59.634 [2024-11-20 09:30:53.390778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.390808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.483574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.483699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:59.634 [2024-11-20 09:30:53.483723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.483747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.483906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.483927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:59.634 [2024-11-20 09:30:53.483942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.483954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.484040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.484066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:59.634 [2024-11-20 09:30:53.484081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.484093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.484228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.484264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:59.634 [2024-11-20 09:30:53.484278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.484290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.484345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.484378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:59.634 [2024-11-20 09:30:53.484392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.484405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.484462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.484486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:59.634 [2024-11-20 09:30:53.484506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.484519] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.484594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:59.634 [2024-11-20 09:30:53.484617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:59.634 [2024-11-20 09:30:53.484631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:59.634 [2024-11-20 09:30:53.484644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:59.634 [2024-11-20 09:30:53.484862] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9045.629 ms, result 0 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84261 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84261 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:02.163 09:30:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84261 ']' 00:34:02.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.164 09:30:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.164 09:30:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:02.164 09:30:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.164 09:30:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:02.164 09:30:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:02.164 [2024-11-20 09:30:57.128302] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
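The 'FTL shutdown' management process above completed with result 0, and the xtrace that follows shows upgrade_shutdown.sh bringing spdk_tgt back up from the JSON config saved by the previous instance. A minimal sketch of that launch-and-wait pattern, assuming the waitforlisten helper from SPDK's autotest_common.sh (visible in the trace) and using $rootdir as a stand-in for /home/vagrant/spdk_repo/spdk:

    # Relaunch the target pinned to core 0 from the config dumped at
    # shutdown, then block until the RPC socket /var/tmp/spdk.sock answers.
    "$rootdir/build/bin/spdk_tgt" "--cpumask=[0]" \
        --config="$rootdir/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

Once the listener is up, the startup trace below runs the usual FTL bring-up: open the base and cache bdevs, load and validate the superblock, and rebuild the layout.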
00:34:02.164 [2024-11-20 09:30:57.128596] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84261 ] 00:34:02.421 [2024-11-20 09:30:57.322823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.421 [2024-11-20 09:30:57.456312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:03.356 [2024-11-20 09:30:58.423946] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:03.356 [2024-11-20 09:30:58.424039] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:03.615 [2024-11-20 09:30:58.573398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.615 [2024-11-20 09:30:58.573473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:03.615 [2024-11-20 09:30:58.573496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:34:03.615 [2024-11-20 09:30:58.573508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.615 [2024-11-20 09:30:58.573582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.615 [2024-11-20 09:30:58.573601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:03.615 [2024-11-20 09:30:58.573614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:34:03.616 [2024-11-20 09:30:58.573626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.573691] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:03.616 [2024-11-20 09:30:58.574614] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:03.616 [2024-11-20 09:30:58.574672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 09:30:58.574689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:03.616 [2024-11-20 09:30:58.574702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.996 ms 00:34:03.616 [2024-11-20 09:30:58.574714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.576752] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:34:03.616 [2024-11-20 09:30:58.593530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 09:30:58.593599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:34:03.616 [2024-11-20 09:30:58.593629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.779 ms 00:34:03.616 [2024-11-20 09:30:58.593641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.593758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 09:30:58.593779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:34:03.616 [2024-11-20 09:30:58.593792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:34:03.616 [2024-11-20 09:30:58.593805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.602841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 
09:30:58.602909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:03.616 [2024-11-20 09:30:58.602928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.919 ms 00:34:03.616 [2024-11-20 09:30:58.602940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.603052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 09:30:58.603074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:03.616 [2024-11-20 09:30:58.603087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:34:03.616 [2024-11-20 09:30:58.603099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.603176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 09:30:58.603195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:03.616 [2024-11-20 09:30:58.603214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:34:03.616 [2024-11-20 09:30:58.603226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.603280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:03.616 [2024-11-20 09:30:58.608318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 09:30:58.608361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:03.616 [2024-11-20 09:30:58.608377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.062 ms 00:34:03.616 [2024-11-20 09:30:58.608394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.608431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 09:30:58.608446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:03.616 [2024-11-20 09:30:58.608459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:03.616 [2024-11-20 09:30:58.608470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.608552] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:34:03.616 [2024-11-20 09:30:58.608587] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:34:03.616 [2024-11-20 09:30:58.608636] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:34:03.616 [2024-11-20 09:30:58.608681] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:34:03.616 [2024-11-20 09:30:58.608824] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:03.616 [2024-11-20 09:30:58.608850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:03.616 [2024-11-20 09:30:58.608866] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:03.616 [2024-11-20 09:30:58.608882] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:03.616 [2024-11-20 09:30:58.608896] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:34:03.616 [2024-11-20 09:30:58.608915] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:03.616 [2024-11-20 09:30:58.608926] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:03.616 [2024-11-20 09:30:58.608937] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:03.616 [2024-11-20 09:30:58.608948] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:03.616 [2024-11-20 09:30:58.608968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 09:30:58.608980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:03.616 [2024-11-20 09:30:58.608993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.420 ms 00:34:03.616 [2024-11-20 09:30:58.609004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.609112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.616 [2024-11-20 09:30:58.609129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:03.616 [2024-11-20 09:30:58.609141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:34:03.616 [2024-11-20 09:30:58.609157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.616 [2024-11-20 09:30:58.609273] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:03.616 [2024-11-20 09:30:58.609290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:03.616 [2024-11-20 09:30:58.609303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:03.616 [2024-11-20 09:30:58.609314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.616 [2024-11-20 09:30:58.609326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:03.616 [2024-11-20 09:30:58.609336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:03.616 [2024-11-20 09:30:58.609347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:03.616 [2024-11-20 09:30:58.609357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:03.616 [2024-11-20 09:30:58.609369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:03.616 [2024-11-20 09:30:58.609380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.616 [2024-11-20 09:30:58.609390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:03.616 [2024-11-20 09:30:58.609401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:34:03.616 [2024-11-20 09:30:58.609411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.616 [2024-11-20 09:30:58.609421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:03.616 [2024-11-20 09:30:58.609433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:34:03.616 [2024-11-20 09:30:58.609444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.616 [2024-11-20 09:30:58.609454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:03.616 [2024-11-20 09:30:58.609464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:03.616 [2024-11-20 09:30:58.609475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.616 [2024-11-20 09:30:58.609486] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:03.616 [2024-11-20 09:30:58.609496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:03.616 [2024-11-20 09:30:58.609506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:03.616 [2024-11-20 09:30:58.609517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:03.616 [2024-11-20 09:30:58.609527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:03.616 [2024-11-20 09:30:58.609538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:03.616 [2024-11-20 09:30:58.609562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:03.616 [2024-11-20 09:30:58.609574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:03.616 [2024-11-20 09:30:58.609584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:03.617 [2024-11-20 09:30:58.609595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:03.617 [2024-11-20 09:30:58.609613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:03.617 [2024-11-20 09:30:58.609624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:03.617 [2024-11-20 09:30:58.609635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:03.617 [2024-11-20 09:30:58.609857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:03.617 [2024-11-20 09:30:58.609923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.617 [2024-11-20 09:30:58.609965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:03.617 [2024-11-20 09:30:58.610002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:03.617 [2024-11-20 09:30:58.610122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.617 [2024-11-20 09:30:58.610172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:03.617 [2024-11-20 09:30:58.610209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:03.617 [2024-11-20 09:30:58.610372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.617 [2024-11-20 09:30:58.610483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:03.617 [2024-11-20 09:30:58.610533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:03.617 [2024-11-20 09:30:58.610672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.617 [2024-11-20 09:30:58.610727] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:34:03.617 [2024-11-20 09:30:58.610835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:03.617 [2024-11-20 09:30:58.610933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:03.617 [2024-11-20 09:30:58.610984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:03.617 [2024-11-20 09:30:58.611139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:03.617 [2024-11-20 09:30:58.611188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:03.617 [2024-11-20 09:30:58.611225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:03.617 [2024-11-20 09:30:58.611375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:03.617 [2024-11-20 09:30:58.611423] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:03.617 [2024-11-20 09:30:58.611461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:03.617 [2024-11-20 09:30:58.611611] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:03.617 [2024-11-20 09:30:58.611638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:03.617 [2024-11-20 09:30:58.611681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:03.617 [2024-11-20 09:30:58.611715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:03.617 [2024-11-20 09:30:58.611727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:03.617 [2024-11-20 09:30:58.611745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:03.617 [2024-11-20 09:30:58.611758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:03.617 [2024-11-20 09:30:58.611838] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:34:03.617 [2024-11-20 09:30:58.611851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:03.617 [2024-11-20 09:30:58.611875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:03.617 [2024-11-20 09:30:58.611887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:03.617 [2024-11-20 09:30:58.611898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:03.617 [2024-11-20 09:30:58.611912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:03.617 [2024-11-20 09:30:58.611924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:03.617 [2024-11-20 09:30:58.611937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.706 ms 00:34:03.617 [2024-11-20 09:30:58.611949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:03.617 [2024-11-20 09:30:58.612027] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:34:03.617 [2024-11-20 09:30:58.612047] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:34:06.931 [2024-11-20 09:31:01.632827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.931 [2024-11-20 09:31:01.632905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:34:06.931 [2024-11-20 09:31:01.632928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3020.816 ms 00:34:06.931 [2024-11-20 09:31:01.632941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.931 [2024-11-20 09:31:01.671564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.931 [2024-11-20 09:31:01.671624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:06.931 [2024-11-20 09:31:01.671657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.343 ms 00:34:06.931 [2024-11-20 09:31:01.671672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.931 [2024-11-20 09:31:01.671816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.931 [2024-11-20 09:31:01.671844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:06.931 [2024-11-20 09:31:01.671859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:34:06.931 [2024-11-20 09:31:01.671871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.931 [2024-11-20 09:31:01.716766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.931 [2024-11-20 09:31:01.716818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:06.931 [2024-11-20 09:31:01.716836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.810 ms 00:34:06.931 [2024-11-20 09:31:01.716853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.931 [2024-11-20 09:31:01.716919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.931 [2024-11-20 09:31:01.716936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:06.931 [2024-11-20 09:31:01.716950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:06.932 [2024-11-20 09:31:01.716961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.717618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.717645] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:06.932 [2024-11-20 09:31:01.717674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:34:06.932 [2024-11-20 09:31:01.717685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.717755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.717772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:06.932 [2024-11-20 09:31:01.717785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:34:06.932 [2024-11-20 09:31:01.717796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.739030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.739080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:06.932 [2024-11-20 09:31:01.739099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.203 ms 00:34:06.932 [2024-11-20 09:31:01.739111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.756052] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:34:06.932 [2024-11-20 09:31:01.756103] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:34:06.932 [2024-11-20 09:31:01.756123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.756136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:34:06.932 [2024-11-20 09:31:01.756151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.839 ms 00:34:06.932 [2024-11-20 09:31:01.756163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.773920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.773967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:34:06.932 [2024-11-20 09:31:01.773984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.704 ms 00:34:06.932 [2024-11-20 09:31:01.773996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.789146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.789190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:34:06.932 [2024-11-20 09:31:01.789207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.097 ms 00:34:06.932 [2024-11-20 09:31:01.789218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.804215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.804261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:34:06.932 [2024-11-20 09:31:01.804278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.947 ms 00:34:06.932 [2024-11-20 09:31:01.804289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.805247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.805289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:06.932 [2024-11-20 
09:31:01.805306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.823 ms 00:34:06.932 [2024-11-20 09:31:01.805318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.923704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.923809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:34:06.932 [2024-11-20 09:31:01.923836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 118.355 ms 00:34:06.932 [2024-11-20 09:31:01.923852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.940289] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:06.932 [2024-11-20 09:31:01.941795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.941838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:06.932 [2024-11-20 09:31:01.941860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.847 ms 00:34:06.932 [2024-11-20 09:31:01.941875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.942062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.942090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:34:06.932 [2024-11-20 09:31:01.942107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:34:06.932 [2024-11-20 09:31:01.942122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.942226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.942275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:06.932 [2024-11-20 09:31:01.942293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:34:06.932 [2024-11-20 09:31:01.942309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.942354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.942378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:06.932 [2024-11-20 09:31:01.942393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:34:06.932 [2024-11-20 09:31:01.942414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.942467] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:34:06.932 [2024-11-20 09:31:01.942488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.942502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:34:06.932 [2024-11-20 09:31:01.942516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:34:06.932 [2024-11-20 09:31:01.942529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.981640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.981744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:34:06.932 [2024-11-20 09:31:01.981769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.068 ms 00:34:06.932 [2024-11-20 09:31:01.981784] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.981944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:06.932 [2024-11-20 09:31:01.981969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:06.932 [2024-11-20 09:31:01.981985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:34:06.932 [2024-11-20 09:31:01.981999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:06.932 [2024-11-20 09:31:01.983776] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3409.688 ms, result 0 00:34:06.932 [2024-11-20 09:31:01.998298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:06.932 [2024-11-20 09:31:02.014281] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:06.932 [2024-11-20 09:31:02.025188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:07.190 09:31:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:07.190 09:31:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:07.190 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:07.190 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:34:07.190 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:34:07.448 [2024-11-20 09:31:02.313282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:07.448 [2024-11-20 09:31:02.313371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:34:07.448 [2024-11-20 09:31:02.313392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:34:07.448 [2024-11-20 09:31:02.313411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:07.448 [2024-11-20 09:31:02.313451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:07.448 [2024-11-20 09:31:02.313468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:34:07.448 [2024-11-20 09:31:02.313480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:34:07.448 [2024-11-20 09:31:02.313492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:07.448 [2024-11-20 09:31:02.313541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:07.448 [2024-11-20 09:31:02.313557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:34:07.448 [2024-11-20 09:31:02.313569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:07.448 [2024-11-20 09:31:02.313580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:07.448 [2024-11-20 09:31:02.313681] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.370 ms, result 0 00:34:07.448 true 00:34:07.448 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:07.709 { 00:34:07.709 "name": "ftl", 00:34:07.709 "properties": [ 00:34:07.709 { 00:34:07.709 "name": "superblock_version", 00:34:07.709 "value": 5, 00:34:07.709 "read-only": true 00:34:07.709 }, 
00:34:07.709 { 00:34:07.709 "name": "base_device", 00:34:07.709 "bands": [ 00:34:07.709 { 00:34:07.709 "id": 0, 00:34:07.709 "state": "CLOSED", 00:34:07.709 "validity": 1.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 1, 00:34:07.709 "state": "CLOSED", 00:34:07.709 "validity": 1.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 2, 00:34:07.709 "state": "CLOSED", 00:34:07.709 "validity": 0.007843137254901933 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 3, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 4, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 5, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 6, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 7, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 8, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 9, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 10, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 11, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 12, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 13, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 14, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 15, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 16, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 17, 00:34:07.709 "state": "FREE", 00:34:07.709 "validity": 0.0 00:34:07.709 } 00:34:07.709 ], 00:34:07.709 "read-only": true 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "name": "cache_device", 00:34:07.709 "type": "bdev", 00:34:07.709 "chunks": [ 00:34:07.709 { 00:34:07.709 "id": 0, 00:34:07.709 "state": "INACTIVE", 00:34:07.709 "utilization": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 1, 00:34:07.709 "state": "OPEN", 00:34:07.709 "utilization": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 2, 00:34:07.709 "state": "OPEN", 00:34:07.709 "utilization": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 3, 00:34:07.709 "state": "FREE", 00:34:07.709 "utilization": 0.0 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "id": 4, 00:34:07.709 "state": "FREE", 00:34:07.709 "utilization": 0.0 00:34:07.709 } 00:34:07.709 ], 00:34:07.709 "read-only": true 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "name": "verbose_mode", 00:34:07.709 "value": true, 00:34:07.709 "unit": "", 00:34:07.709 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:34:07.709 }, 00:34:07.709 { 00:34:07.709 "name": "prep_upgrade_on_shutdown", 00:34:07.709 "value": false, 00:34:07.709 "unit": "", 00:34:07.709 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:34:07.709 } 00:34:07.709 ] 00:34:07.709 } 00:34:07.709 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:34:07.709 09:31:02 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:34:07.709 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:07.967 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:34:07.967 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:34:07.967 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:34:07.967 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:34:07.968 09:31:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:34:08.226 Validate MD5 checksum, iteration 1 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:08.226 09:31:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:08.226 [2024-11-20 09:31:03.298323] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
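Both property queries above return 0: no NV cache chunk has non-zero utilization and no band is in the OPENED state, so the target is quiescent and the checksum pass can begin on a dedicated spdk_dd initiator (note the '--cpumask=[1]' and the separate /var/tmp/spdk.tgt.sock RPC socket). A sketch of how the harness combines rpc.py with jq for the chunk count, assuming $rootdir as above; the jq filter is verbatim from the trace:

    # Count in-use NV cache chunks reported by bdev_ftl_get_properties.
    used=$("$rootdir/scripts/rpc.py" bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device")
             | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && echo "NV cache still holds unwritten data"

The band query works the same way, selecting on .state == "OPENED" instead of non-zero utilization.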
00:34:08.226 [2024-11-20 09:31:03.298809] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84346 ] 00:34:08.485 [2024-11-20 09:31:03.487683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.743 [2024-11-20 09:31:03.664485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.643  [2024-11-20T09:31:06.696Z] Copying: 465/1024 [MB] (465 MBps) [2024-11-20T09:31:06.696Z] Copying: 947/1024 [MB] (482 MBps) [2024-11-20T09:31:08.133Z] Copying: 1024/1024 [MB] (average 457 MBps) 00:34:13.013 00:34:13.271 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:34:13.271 09:31:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:15.803 Validate MD5 checksum, iteration 2 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7080b9b78dd472d2ecca2dab0bd85540 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7080b9b78dd472d2ecca2dab0bd85540 != \7\0\8\0\b\9\b\7\8\d\d\4\7\2\d\2\e\c\c\a\2\d\a\b\0\b\d\8\5\5\4\0 ]] 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:15.803 09:31:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:15.803 [2024-11-20 09:31:10.407117] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
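Iteration 1 above read 1024 MiB from ftln1 over NVMe/TCP at roughly 457 MBps, and the computed md5 (7080b9b78dd472d2ecca2dab0bd85540) matched the expected value, so the loop advances skip by 1024 and starts iteration 2. A sketch of one loop body under stated assumptions: tcp_dd is the harness wrapper around spdk_dd seen in the trace, while $testdir/file and $expected_sum stand in for the script's own bookkeeping of the output file and the reference checksum recorded earlier:

    # One test_validate_checksum iteration: read 1024 x 1 MiB blocks from
    # the FTL bdev over TCP, hash the output file, compare, then move on.
    echo "Validate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 \
        --count=1024 --qd=2 --skip="$skip"
    skip=$((skip + 1024))
    sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
    [[ $sum == "$expected_sum" ]] || return 1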
00:34:15.803 [2024-11-20 09:31:10.407506] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84421 ] 00:34:15.803 [2024-11-20 09:31:10.590354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.803 [2024-11-20 09:31:10.728586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:17.705  [2024-11-20T09:31:13.392Z] Copying: 491/1024 [MB] (491 MBps) [2024-11-20T09:31:13.649Z] Copying: 994/1024 [MB] (503 MBps) [2024-11-20T09:31:15.563Z] Copying: 1024/1024 [MB] (average 497 MBps) 00:34:20.443 00:34:20.443 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:20.443 09:31:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=46a1202557923cf62d7509ffd04236c0 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 46a1202557923cf62d7509ffd04236c0 != \4\6\a\1\2\0\2\5\5\7\9\2\3\c\f\6\2\d\7\5\0\9\f\f\d\0\4\2\3\6\c\0 ]] 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84261 ]] 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84261 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84494 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84494 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84494 ']' 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
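With both checksums verified, tcp_target_shutdown_dirty kills pid 84261 with SIGKILL, so the 'FTL shutdown' rollback sequence never runs and FTL is left in the dirty state set during startup; the fresh instance (pid 84494) started right after must recover from that state. A sketch of the pattern, with the helper names taken from the ftl/common.sh trace above:

    # Simulate a crash: SIGKILL the target so no graceful FTL shutdown
    # runs, then start a new instance that must recover the dirty state.
    [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    tcp_target_setup   # relaunch spdk_tgt and wait for /var/tmp/spdk.sock

The startup trace that follows shows the recovery path: the new target reopens the same bdevs, reloads the superblock, and rebuilds the layout.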
00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.968 09:31:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:22.968 [2024-11-20 09:31:18.016894] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:34:22.968 [2024-11-20 09:31:18.017088] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84494 ] 00:34:23.225 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84261 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:34:23.225 [2024-11-20 09:31:18.205351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:23.483 [2024-11-20 09:31:18.399780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.852 [2024-11-20 09:31:19.538180] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:24.852 [2024-11-20 09:31:19.538292] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:34:24.852 [2024-11-20 09:31:19.689302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.852 [2024-11-20 09:31:19.689397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:24.852 [2024-11-20 09:31:19.689420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:34:24.852 [2024-11-20 09:31:19.689435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.852 [2024-11-20 09:31:19.689526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.852 [2024-11-20 09:31:19.689547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:24.852 [2024-11-20 09:31:19.689562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:34:24.852 [2024-11-20 09:31:19.689574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.852 [2024-11-20 09:31:19.689623] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:24.852 [2024-11-20 09:31:19.690607] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:24.852 [2024-11-20 09:31:19.690679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.852 [2024-11-20 09:31:19.690697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:24.852 [2024-11-20 09:31:19.690711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.076 ms 00:34:24.852 [2024-11-20 09:31:19.690724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.852 [2024-11-20 09:31:19.691270] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:34:24.852 [2024-11-20 09:31:19.714112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.852 [2024-11-20 09:31:19.714195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:34:24.852 [2024-11-20 09:31:19.714224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.839 ms 00:34:24.852 [2024-11-20 09:31:19.714253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.852 [2024-11-20 09:31:19.727173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:34:24.852 [2024-11-20 09:31:19.727228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:34:24.852 [2024-11-20 09:31:19.727254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:34:24.852 [2024-11-20 09:31:19.727267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.853 [2024-11-20 09:31:19.727890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.853 [2024-11-20 09:31:19.727926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:24.853 [2024-11-20 09:31:19.727944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.450 ms 00:34:24.853 [2024-11-20 09:31:19.727957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.853 [2024-11-20 09:31:19.728041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.853 [2024-11-20 09:31:19.728064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:24.853 [2024-11-20 09:31:19.728079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:34:24.853 [2024-11-20 09:31:19.728091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.853 [2024-11-20 09:31:19.728137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.853 [2024-11-20 09:31:19.728153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:24.853 [2024-11-20 09:31:19.728166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:34:24.853 [2024-11-20 09:31:19.728179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.853 [2024-11-20 09:31:19.728221] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:24.853 [2024-11-20 09:31:19.733633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.853 [2024-11-20 09:31:19.733722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:24.853 [2024-11-20 09:31:19.733754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.423 ms 00:34:24.853 [2024-11-20 09:31:19.733779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.853 [2024-11-20 09:31:19.733851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.853 [2024-11-20 09:31:19.733879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:24.853 [2024-11-20 09:31:19.733902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:34:24.853 [2024-11-20 09:31:19.733922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.853 [2024-11-20 09:31:19.734008] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:34:24.853 [2024-11-20 09:31:19.734073] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:34:24.853 [2024-11-20 09:31:19.734156] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:34:24.853 [2024-11-20 09:31:19.734201] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:34:24.853 [2024-11-20 09:31:19.734378] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:24.853 [2024-11-20 09:31:19.734421] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:24.853 [2024-11-20 09:31:19.734449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:34:24.853 [2024-11-20 09:31:19.734477] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:24.853 [2024-11-20 09:31:19.734503] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:34:24.853 [2024-11-20 09:31:19.734527] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:24.853 [2024-11-20 09:31:19.734548] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:24.853 [2024-11-20 09:31:19.734569] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:24.853 [2024-11-20 09:31:19.734591] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:24.853 [2024-11-20 09:31:19.734613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.853 [2024-11-20 09:31:19.734681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:24.853 [2024-11-20 09:31:19.734709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.610 ms 00:34:24.853 [2024-11-20 09:31:19.734730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.853 [2024-11-20 09:31:19.734869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.853 [2024-11-20 09:31:19.734903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:24.853 [2024-11-20 09:31:19.734928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:34:24.853 [2024-11-20 09:31:19.734949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.853 [2024-11-20 09:31:19.735118] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:24.853 [2024-11-20 09:31:19.735168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:24.853 [2024-11-20 09:31:19.735202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:24.853 [2024-11-20 09:31:19.735226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.735247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:24.853 [2024-11-20 09:31:19.735267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.735291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:24.853 [2024-11-20 09:31:19.735312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:24.853 [2024-11-20 09:31:19.735332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:24.853 [2024-11-20 09:31:19.735351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.735371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:24.853 [2024-11-20 09:31:19.735391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:34:24.853 [2024-11-20 09:31:19.735409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.735426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:24.853 [2024-11-20 09:31:19.735446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:34:24.853 [2024-11-20 09:31:19.735465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.735484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:24.853 [2024-11-20 09:31:19.735502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:24.853 [2024-11-20 09:31:19.735521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.735542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:24.853 [2024-11-20 09:31:19.735564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:24.853 [2024-11-20 09:31:19.735583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:24.853 [2024-11-20 09:31:19.735602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:24.853 [2024-11-20 09:31:19.735670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:24.853 [2024-11-20 09:31:19.735702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:24.853 [2024-11-20 09:31:19.735723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:24.853 [2024-11-20 09:31:19.735743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:24.853 [2024-11-20 09:31:19.735764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:24.853 [2024-11-20 09:31:19.735784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:24.853 [2024-11-20 09:31:19.735804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:24.853 [2024-11-20 09:31:19.735825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:24.853 [2024-11-20 09:31:19.735844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:24.853 [2024-11-20 09:31:19.735874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:24.853 [2024-11-20 09:31:19.735895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.735917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:24.853 [2024-11-20 09:31:19.735940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:24.853 [2024-11-20 09:31:19.735961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.735981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:24.853 [2024-11-20 09:31:19.736001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:24.853 [2024-11-20 09:31:19.736022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.736040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:24.853 [2024-11-20 09:31:19.736060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:24.853 [2024-11-20 09:31:19.736080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.736100] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:34:24.853 [2024-11-20 09:31:19.736121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:24.853 [2024-11-20 09:31:19.736140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:24.853 [2024-11-20 09:31:19.736163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:34:24.853 [2024-11-20 09:31:19.736185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:24.853 [2024-11-20 09:31:19.736207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:24.853 [2024-11-20 09:31:19.736228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:24.853 [2024-11-20 09:31:19.736248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:24.853 [2024-11-20 09:31:19.736269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:24.853 [2024-11-20 09:31:19.736289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:24.853 [2024-11-20 09:31:19.736312] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:24.853 [2024-11-20 09:31:19.736337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:24.853 [2024-11-20 09:31:19.736360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:24.853 [2024-11-20 09:31:19.736383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:24.853 [2024-11-20 09:31:19.736405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:24.853 [2024-11-20 09:31:19.736428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:24.853 [2024-11-20 09:31:19.736449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:24.853 [2024-11-20 09:31:19.736470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:24.854 [2024-11-20 09:31:19.736491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:24.854 [2024-11-20 09:31:19.736513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:24.854 [2024-11-20 09:31:19.736534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:24.854 [2024-11-20 09:31:19.736563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:24.854 [2024-11-20 09:31:19.736585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:24.854 [2024-11-20 09:31:19.736607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:24.854 [2024-11-20 09:31:19.736630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:34:24.854 [2024-11-20 09:31:19.736676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:24.854 [2024-11-20 09:31:19.736702] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:34:24.854 [2024-11-20 09:31:19.736726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:24.854 [2024-11-20 09:31:19.736751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:24.854 [2024-11-20 09:31:19.736774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:24.854 [2024-11-20 09:31:19.736795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:24.854 [2024-11-20 09:31:19.736816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:24.854 [2024-11-20 09:31:19.736838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.736872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:24.854 [2024-11-20 09:31:19.736896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.803 ms 00:34:24.854 [2024-11-20 09:31:19.736918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.792482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.792617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:24.854 [2024-11-20 09:31:19.792698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.422 ms 00:34:24.854 [2024-11-20 09:31:19.792728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.792883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.792919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:34:24.854 [2024-11-20 09:31:19.792948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:34:24.854 [2024-11-20 09:31:19.792972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.859981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.860101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:24.854 [2024-11-20 09:31:19.860138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.782 ms 00:34:24.854 [2024-11-20 09:31:19.860161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.860301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.860331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:24.854 [2024-11-20 09:31:19.860355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:34:24.854 [2024-11-20 09:31:19.860386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.860692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.860732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:24.854 [2024-11-20 09:31:19.860757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.146 ms 00:34:24.854 [2024-11-20 09:31:19.860778] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.860893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.860923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:24.854 [2024-11-20 09:31:19.860946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:34:24.854 [2024-11-20 09:31:19.860982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.884386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.884463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:24.854 [2024-11-20 09:31:19.884487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.339 ms 00:34:24.854 [2024-11-20 09:31:19.884500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.884783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.884815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:34:24.854 [2024-11-20 09:31:19.884832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:34:24.854 [2024-11-20 09:31:19.884844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.918589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.918683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:34:24.854 [2024-11-20 09:31:19.918708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.712 ms 00:34:24.854 [2024-11-20 09:31:19.918722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:24.854 [2024-11-20 09:31:19.932228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:24.854 [2024-11-20 09:31:19.932277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:34:24.854 [2024-11-20 09:31:19.932305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:34:24.854 [2024-11-20 09:31:19.932318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.179 [2024-11-20 09:31:20.018725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.179 [2024-11-20 09:31:20.018851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:34:25.179 [2024-11-20 09:31:20.018884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.289 ms 00:34:25.179 [2024-11-20 09:31:20.018898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.179 [2024-11-20 09:31:20.019238] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:34:25.179 [2024-11-20 09:31:20.019485] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:34:25.179 [2024-11-20 09:31:20.019720] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:34:25.179 [2024-11-20 09:31:20.019918] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:34:25.179 [2024-11-20 09:31:20.019938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.179 [2024-11-20 09:31:20.019952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:34:25.179 [2024-11-20 
09:31:20.019967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.904 ms 00:34:25.179 [2024-11-20 09:31:20.019980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.179 [2024-11-20 09:31:20.020136] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:34:25.179 [2024-11-20 09:31:20.020159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.179 [2024-11-20 09:31:20.020178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:34:25.179 [2024-11-20 09:31:20.020192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:34:25.179 [2024-11-20 09:31:20.020204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.179 [2024-11-20 09:31:20.041260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.179 [2024-11-20 09:31:20.041330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:34:25.179 [2024-11-20 09:31:20.041352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.017 ms 00:34:25.179 [2024-11-20 09:31:20.041365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.179 [2024-11-20 09:31:20.053870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.179 [2024-11-20 09:31:20.053919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:34:25.179 [2024-11-20 09:31:20.053937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:34:25.179 [2024-11-20 09:31:20.053950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.179 [2024-11-20 09:31:20.054131] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:34:25.179 [2024-11-20 09:31:20.054466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.179 [2024-11-20 09:31:20.054497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:25.179 [2024-11-20 09:31:20.054512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:34:25.179 [2024-11-20 09:31:20.054525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.757 [2024-11-20 09:31:20.741172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.757 [2024-11-20 09:31:20.741260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:25.757 [2024-11-20 09:31:20.741285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 685.383 ms 00:34:25.757 [2024-11-20 09:31:20.741301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.757 [2024-11-20 09:31:20.746528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.757 [2024-11-20 09:31:20.746571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:25.757 [2024-11-20 09:31:20.746588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:34:25.758 [2024-11-20 09:31:20.746600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.758 [2024-11-20 09:31:20.746948] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:34:25.758 [2024-11-20 09:31:20.746985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.758 [2024-11-20 09:31:20.747000] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:25.758 [2024-11-20 09:31:20.747014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:34:25.758 [2024-11-20 09:31:20.747025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.758 [2024-11-20 09:31:20.747071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.758 [2024-11-20 09:31:20.747090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:25.758 [2024-11-20 09:31:20.747104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:34:25.758 [2024-11-20 09:31:20.747116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:25.758 [2024-11-20 09:31:20.747175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 693.052 ms, result 0 00:34:25.758 [2024-11-20 09:31:20.747234] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:34:25.758 [2024-11-20 09:31:20.747368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:25.758 [2024-11-20 09:31:20.747389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:34:25.758 [2024-11-20 09:31:20.747403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.136 ms 00:34:25.758 [2024-11-20 09:31:20.747415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.310053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.310127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:34:26.323 [2024-11-20 09:31:21.310150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 561.507 ms 00:34:26.323 [2024-11-20 09:31:21.310163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.315177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.315220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:34:26.323 [2024-11-20 09:31:21.315237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.042 ms 00:34:26.323 [2024-11-20 09:31:21.315250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.315727] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:34:26.323 [2024-11-20 09:31:21.315766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.315780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:34:26.323 [2024-11-20 09:31:21.315794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.473 ms 00:34:26.323 [2024-11-20 09:31:21.315805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.315927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.315947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:34:26.323 [2024-11-20 09:31:21.315961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:34:26.323 [2024-11-20 09:31:21.315979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 
09:31:21.316032] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 568.799 ms, result 0 00:34:26.323 [2024-11-20 09:31:21.316091] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:26.323 [2024-11-20 09:31:21.316108] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:34:26.323 [2024-11-20 09:31:21.316123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.316136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:34:26.323 [2024-11-20 09:31:21.316149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1262.034 ms 00:34:26.323 [2024-11-20 09:31:21.316161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.316208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.316223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:34:26.323 [2024-11-20 09:31:21.316243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:34:26.323 [2024-11-20 09:31:21.316265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.330643] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:34:26.323 [2024-11-20 09:31:21.330847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.330868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:34:26.323 [2024-11-20 09:31:21.330885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.556 ms 00:34:26.323 [2024-11-20 09:31:21.330898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.331714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.331747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:34:26.323 [2024-11-20 09:31:21.331769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.655 ms 00:34:26.323 [2024-11-20 09:31:21.331781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.334216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.334260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:34:26.323 [2024-11-20 09:31:21.334277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.397 ms 00:34:26.323 [2024-11-20 09:31:21.334290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.334351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.334368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:34:26.323 [2024-11-20 09:31:21.334381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:34:26.323 [2024-11-20 09:31:21.334400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.334559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.334577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:34:26.323 
[2024-11-20 09:31:21.334591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:34:26.323 [2024-11-20 09:31:21.334603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.334639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.334671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:34:26.323 [2024-11-20 09:31:21.334686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:34:26.323 [2024-11-20 09:31:21.334698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.334747] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:34:26.323 [2024-11-20 09:31:21.334767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.334780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:34:26.323 [2024-11-20 09:31:21.334792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:34:26.323 [2024-11-20 09:31:21.334805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.334884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:26.323 [2024-11-20 09:31:21.334902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:34:26.323 [2024-11-20 09:31:21.334916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:34:26.323 [2024-11-20 09:31:21.334929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:26.323 [2024-11-20 09:31:21.336364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1646.511 ms, result 0 00:34:26.323 [2024-11-20 09:31:21.351598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.323 [2024-11-20 09:31:21.367788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:34:26.323 [2024-11-20 09:31:21.378177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:34:26.323 Validate MD5 checksum, iteration 1 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:26.323 09:31:21 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:26.323 09:31:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:34:26.581 [2024-11-20 09:31:21.548162] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:34:26.581 [2024-11-20 09:31:21.548402] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84541 ] 00:34:26.838 [2024-11-20 09:31:21.762048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:26.838 [2024-11-20 09:31:21.919300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.733  [2024-11-20T09:31:24.784Z] Copying: 445/1024 [MB] (445 MBps) [2024-11-20T09:31:25.041Z] Copying: 847/1024 [MB] (402 MBps) [2024-11-20T09:31:26.415Z] Copying: 1024/1024 [MB] (average 419 MBps) 00:34:31.295 00:34:31.295 09:31:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:34:31.295 09:31:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:33.826 Validate MD5 checksum, iteration 2 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7080b9b78dd472d2ecca2dab0bd85540 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7080b9b78dd472d2ecca2dab0bd85540 != \7\0\8\0\b\9\b\7\8\d\d\4\7\2\d\2\e\c\c\a\2\d\a\b\0\b\d\8\5\5\4\0 ]] 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:34:33.826 09:31:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:34:33.826 
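The two spdk_dd invocations traced above are the checksum pass from upgrade_shutdown.sh@96-105: read the FTL bdev back through the NVMe/TCP initiator in 1 GiB windows, hash each window, and require the hash to match the one recorded before the shutdown/upgrade cycle. A minimal sketch of that loop as reconstructed from the xtrace, assuming the harness has already set iterations and the pre-shutdown md5 array; tcp_dd is the common.sh@198-199 wrapper around spdk_dd:

    # Body of test_validate_checksum (upgrade_shutdown.sh@116), reconstructed
    # from the trace; md5[] and iterations come from the earlier write phase.
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Pull the next 1 GiB window from the ftln1 bdev over NVMe/TCP.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # Hash the window and compare with the checksum taken before shutdown.
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum == "${md5[$i]}" ]] || return 1
    done

The qd=2 and 1 MiB block size match the dd progress lines above, roughly 420 MBps per pass against the recovered FTL instance.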
[2024-11-20 09:31:28.645900] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:34:33.826 [2024-11-20 09:31:28.646055] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84615 ] 00:34:33.826 [2024-11-20 09:31:28.818468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.083 [2024-11-20 09:31:28.953499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.981  [2024-11-20T09:31:31.664Z] Copying: 431/1024 [MB] (431 MBps) [2024-11-20T09:31:32.242Z] Copying: 846/1024 [MB] (415 MBps) [2024-11-20T09:31:33.613Z] Copying: 1024/1024 [MB] (average 417 MBps) 00:34:38.493 00:34:38.493 09:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:34:38.493 09:31:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:41.018 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:34:41.018 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=46a1202557923cf62d7509ffd04236c0 00:34:41.018 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 46a1202557923cf62d7509ffd04236c0 != \4\6\a\1\2\0\2\5\5\7\9\2\3\c\f\6\2\d\7\5\0\9\f\f\d\0\4\2\3\6\c\0 ]] 00:34:41.018 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:34:41.018 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:34:41.018 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:34:41.018 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:34:41.018 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:34:41.018 09:31:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84494 ]] 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84494 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84494 ']' 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84494 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84494 00:34:41.018 killing process with pid 84494 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 84494' 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84494 00:34:41.018 09:31:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84494 00:34:42.389 [2024-11-20 09:31:37.212162] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:34:42.389 [2024-11-20 09:31:37.231205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.389 [2024-11-20 09:31:37.231277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:34:42.389 [2024-11-20 09:31:37.231300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:34:42.389 [2024-11-20 09:31:37.231314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.231348] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:34:42.390 [2024-11-20 09:31:37.235030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.235065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:34:42.390 [2024-11-20 09:31:37.235081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.659 ms 00:34:42.390 [2024-11-20 09:31:37.235100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.235361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.235380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:34:42.390 [2024-11-20 09:31:37.235395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.234 ms 00:34:42.390 [2024-11-20 09:31:37.235408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.236819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.236861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:34:42.390 [2024-11-20 09:31:37.236878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.387 ms 00:34:42.390 [2024-11-20 09:31:37.236890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.238107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.238282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:34:42.390 [2024-11-20 09:31:37.238310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.164 ms 00:34:42.390 [2024-11-20 09:31:37.238323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.251206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.251252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:34:42.390 [2024-11-20 09:31:37.251270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.835 ms 00:34:42.390 [2024-11-20 09:31:37.251291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.258079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.258124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:34:42.390 [2024-11-20 09:31:37.258141] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.744 ms 00:34:42.390 [2024-11-20 09:31:37.258153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.258273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.258294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:34:42.390 [2024-11-20 09:31:37.258308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:34:42.390 [2024-11-20 09:31:37.258320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.271351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.271513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:34:42.390 [2024-11-20 09:31:37.271537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.977 ms 00:34:42.390 [2024-11-20 09:31:37.271555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.285514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.285933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:34:42.390 [2024-11-20 09:31:37.285966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.866 ms 00:34:42.390 [2024-11-20 09:31:37.285980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.298597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.298673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:34:42.390 [2024-11-20 09:31:37.298696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.535 ms 00:34:42.390 [2024-11-20 09:31:37.298708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.310609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.310663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:34:42.390 [2024-11-20 09:31:37.310681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.811 ms 00:34:42.390 [2024-11-20 09:31:37.310693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.310735] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:34:42.390 [2024-11-20 09:31:37.310760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:42.390 [2024-11-20 09:31:37.310777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:34:42.390 [2024-11-20 09:31:37.310790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:34:42.390 [2024-11-20 09:31:37.310804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 
09:31:37.310854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:42.390 [2024-11-20 09:31:37.310989] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:34:42.390 [2024-11-20 09:31:37.311001] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 7eda8db2-5b88-4c93-905a-930b03859a93 00:34:42.390 [2024-11-20 09:31:37.311014] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:34:42.390 [2024-11-20 09:31:37.311026] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:34:42.390 [2024-11-20 09:31:37.311037] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:34:42.390 [2024-11-20 09:31:37.311050] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:34:42.390 [2024-11-20 09:31:37.311061] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:34:42.390 [2024-11-20 09:31:37.311075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:34:42.390 [2024-11-20 09:31:37.311086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:34:42.390 [2024-11-20 09:31:37.311097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:34:42.390 [2024-11-20 09:31:37.311108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:34:42.390 [2024-11-20 09:31:37.311119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.311140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:34:42.390 [2024-11-20 09:31:37.311161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.385 ms 00:34:42.390 [2024-11-20 09:31:37.311174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.390 [2024-11-20 09:31:37.328412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.390 [2024-11-20 09:31:37.328461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:34:42.390 [2024-11-20 09:31:37.328479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 17.184 ms 00:34:42.391 [2024-11-20 09:31:37.328491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.391 [2024-11-20 09:31:37.328993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:42.391 [2024-11-20 09:31:37.329012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:34:42.391 [2024-11-20 09:31:37.329026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.465 ms 00:34:42.391 [2024-11-20 09:31:37.329038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.391 [2024-11-20 09:31:37.385824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.391 [2024-11-20 09:31:37.386131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:34:42.391 [2024-11-20 09:31:37.386273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.391 [2024-11-20 09:31:37.386327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.391 [2024-11-20 09:31:37.386627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.391 [2024-11-20 09:31:37.386778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:34:42.391 [2024-11-20 09:31:37.386884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.391 [2024-11-20 09:31:37.386932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.391 [2024-11-20 09:31:37.387187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.391 [2024-11-20 09:31:37.387272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:34:42.391 [2024-11-20 09:31:37.387384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.391 [2024-11-20 09:31:37.387436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.391 [2024-11-20 09:31:37.387557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.391 [2024-11-20 09:31:37.387624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:34:42.391 [2024-11-20 09:31:37.387696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.391 [2024-11-20 09:31:37.387769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.391 [2024-11-20 09:31:37.504432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.391 [2024-11-20 09:31:37.504567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:34:42.391 [2024-11-20 09:31:37.504592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.391 [2024-11-20 09:31:37.504607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.649 [2024-11-20 09:31:37.596194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.649 [2024-11-20 09:31:37.596285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:34:42.649 [2024-11-20 09:31:37.596306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.649 [2024-11-20 09:31:37.596319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.649 [2024-11-20 09:31:37.596468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.649 [2024-11-20 09:31:37.596488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:42.649 [2024-11-20 09:31:37.596502] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.649 [2024-11-20 09:31:37.596514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.649 [2024-11-20 09:31:37.596576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.649 [2024-11-20 09:31:37.596594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:42.649 [2024-11-20 09:31:37.596615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.649 [2024-11-20 09:31:37.596640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.649 [2024-11-20 09:31:37.596812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.649 [2024-11-20 09:31:37.596832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:42.649 [2024-11-20 09:31:37.596846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.649 [2024-11-20 09:31:37.596859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.649 [2024-11-20 09:31:37.596912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.649 [2024-11-20 09:31:37.596930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:34:42.649 [2024-11-20 09:31:37.596943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.649 [2024-11-20 09:31:37.596962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.649 [2024-11-20 09:31:37.597013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.649 [2024-11-20 09:31:37.597029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:42.649 [2024-11-20 09:31:37.597041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.649 [2024-11-20 09:31:37.597052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.649 [2024-11-20 09:31:37.597110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:34:42.649 [2024-11-20 09:31:37.597127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:42.649 [2024-11-20 09:31:37.597145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:34:42.649 [2024-11-20 09:31:37.597157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:42.649 [2024-11-20 09:31:37.597318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 366.074 ms, result 0 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:44.032 Remove shared memory files 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
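Every management step in the FTL startup and shutdown traces above is emitted by trace_step in mngt/ftl_mngt.c as the same four-entry quadruple: Action, name, duration, status. That regularity makes it easy to mine step timings straight out of a console log like this one; a rough helper, not part of the test suite, assuming one log entry per line and a saved console.log:

    # Pair each "name:" with the "duration:" that follows it and list the
    # slowest FTL management steps first.
    grep -oE 'name: .*|duration: [0-9.]+ ms' console.log |
        awk '/^name:/     { name = substr($0, 7) }
             /^duration:/ { printf "%10.3f ms  %s\n", $2, name }' |
        sort -rn | head

Against this run it would surface the dominant steps seen above, such as the 685 ms and 561 ms "Chunk recovery, read vss" passes during startup.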
00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84261 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:34:44.032 ************************************ 00:34:44.032 END TEST ftl_upgrade_shutdown 00:34:44.032 ************************************ 00:34:44.032 00:34:44.032 real 1m38.539s 00:34:44.032 user 2m19.016s 00:34:44.032 sys 0m26.673s 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.032 09:31:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:44.032 09:31:38 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:34:44.032 09:31:38 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:44.032 09:31:38 ftl -- ftl/ftl.sh@14 -- # killprocess 76797 00:34:44.032 09:31:38 ftl -- common/autotest_common.sh@954 -- # '[' -z 76797 ']' 00:34:44.032 Process with pid 76797 is not found 00:34:44.032 09:31:38 ftl -- common/autotest_common.sh@958 -- # kill -0 76797 00:34:44.032 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76797) - No such process 00:34:44.032 09:31:38 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76797 is not found' 00:34:44.032 09:31:38 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:44.032 09:31:38 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:44.032 09:31:38 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84747 00:34:44.032 09:31:38 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84747 00:34:44.032 09:31:38 ftl -- common/autotest_common.sh@835 -- # '[' -z 84747 ']' 00:34:44.032 09:31:38 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.032 09:31:38 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.032 09:31:38 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.032 09:31:38 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.032 09:31:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:44.032 [2024-11-20 09:31:39.005559] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
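The killprocess calls traced here (autotest_common.sh@954-981) follow one pattern: probe the pid with kill -0, refuse to signal anything whose comm name is sudo, then kill and reap. When the probe fails, as with the stale pid 76797 above, the helper only reports that the process is gone. A reconstruction from the xtrace; anything the trace does not show is a guess:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            # The path taken for pid 76797: already gone, nothing to do.
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [[ $(uname) == Linux ]]; then
            # Never signal a sudo wrapper directly; target the real worker.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null
        return 0
    }

The kill -0 probe is what lets the earlier shutdown of pid 84494 and this no-op for 76797 share the same helper.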
00:34:44.032 [2024-11-20 09:31:39.006018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84747 ] 00:34:44.288 [2024-11-20 09:31:39.194945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.288 [2024-11-20 09:31:39.370985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.218 09:31:40 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.218 09:31:40 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:45.218 09:31:40 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:45.783 nvme0n1 00:34:45.783 09:31:40 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:45.783 09:31:40 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:45.783 09:31:40 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:46.040 09:31:40 ftl -- ftl/common.sh@28 -- # stores=8e3a5fa2-79e4-4af2-8701-f1dcc6f2bbad 00:34:46.040 09:31:40 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:46.040 09:31:40 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e3a5fa2-79e4-4af2-8701-f1dcc6f2bbad 00:34:46.297 09:31:41 ftl -- ftl/ftl.sh@23 -- # killprocess 84747 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@954 -- # '[' -z 84747 ']' 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@958 -- # kill -0 84747 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@959 -- # uname 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84747 00:34:46.297 killing process with pid 84747 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84747' 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@973 -- # kill 84747 00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@978 -- # wait 84747 00:34:48.822 09:31:43 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:48.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:48.822 Waiting for block devices as requested 00:34:48.822 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:48.822 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:48.822 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:49.159 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:54.416 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:54.416 09:31:49 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:54.416 09:31:49 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:54.416 Remove shared memory files 00:34:54.416 09:31:49 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:54.416 09:31:49 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:54.416 09:31:49 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:54.416 09:31:49 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:54.416 09:31:49 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:54.416 
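The "uio_pci_generic -> nvme" lines above are setup.sh reset returning each emulated NVMe controller from SPDK's userspace driver to the kernel, so the devices reappear as block devices ("Waiting for block devices as requested"). The core of a rebind is plain sysfs; a simplified sketch, not the actual setup.sh, which additionally handles hugepages, VFIO, and device allowlists:

    rebind_to_kernel() {
        local bdf=$1    # e.g. 0000:00:11.0
        # Detach from the userspace driver SPDK bound earlier.
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
        fi
        # Clear any driver_override, then let the kernel reprobe the device.
        echo "" > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe
    }

    for bdf in 0000:00:11.0 0000:00:10.0 0000:00:12.0 0000:00:13.0; do
        rebind_to_kernel "$bdf"
    done

The warning about 0000:00:13.0 likely means udev events for that controller's namespaces were not observed before the settle timeout, even though its rebind line appears above.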
00:34:46.297 09:31:41 ftl -- ftl/ftl.sh@23 -- # killprocess 84747
00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@954 -- # '[' -z 84747 ']'
00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@958 -- # kill -0 84747
00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@959 -- # uname
00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84747
00:34:46.297 killing process with pid 84747 09:31:41 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84747'
00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@973 -- # kill 84747
00:34:46.297 09:31:41 ftl -- common/autotest_common.sh@978 -- # wait 84747
00:34:48.822 09:31:43 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:34:48.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:48.822 Waiting for block devices as requested
00:34:48.822 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:34:48.822 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:34:48.822 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:34:49.159 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:34:54.416 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:34:54.416 09:31:49 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:34:54.416 09:31:49 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:34:54.416 Remove shared memory files
00:34:54.416 09:31:49 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:34:54.416 09:31:49 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:34:54.416 09:31:49 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:34:54.416 09:31:49 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:34:54.416 09:31:49 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:34:54.416 ************************************
00:34:54.416 END TEST ftl
00:34:54.416 ************************************
00:34:54.416
00:34:54.416 real 12m10.252s
00:34:54.416 user 15m12.789s
00:34:54.416 sys 1m41.044s
00:34:54.416 09:31:49 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:54.416 09:31:49 ftl -- common/autotest_common.sh@10 -- # set +x
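The remove_shm step above clears the shared-memory files a test run can leave behind in /dev/shm before the suite reports its timing. Only as a rough sketch, with file names taken from this trace (the exact glob list in ftl/common.sh is not visible here, so the catch-all pattern is an assumption):

# Illustrative remove_shm-style cleanup; names follow the trace above.
echo 'Remove shared memory files'
rm -f /dev/shm/spdk_tgt_trace.pid*   # RPC trace files, e.g. pid84261 above
rm -f /dev/shm/iscsi
rm -f /dev/shm/spdk*                 # assumed catch-all for SPDK shm segments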
00:34:54.416 09:31:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:34:54.416 09:31:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:34:54.416 09:31:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:34:54.416 09:31:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:34:54.416 09:31:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:34:54.416 09:31:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:34:54.416 09:31:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:34:54.416 09:31:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:34:54.416 09:31:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:34:54.416 09:31:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:34:54.416 09:31:49 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:54.416 09:31:49 -- common/autotest_common.sh@10 -- # set +x
00:34:54.416 09:31:49 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:34:54.416 09:31:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:34:54.416 09:31:49 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:34:54.416 09:31:49 -- common/autotest_common.sh@10 -- # set +x
00:34:55.790 INFO: APP EXITING
00:34:55.790 INFO: killing all VMs
00:34:55.790 INFO: killing vhost app
00:34:55.790 INFO: EXIT DONE
00:34:56.048 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:56.612 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:34:56.612 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:34:56.612 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:34:56.612 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:34:56.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:34:57.127 Cleaning
00:34:57.127 Removing: /var/run/dpdk/spdk0/config
00:34:57.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:34:57.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:34:57.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:34:57.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:34:57.127 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:34:57.127 Removing: /var/run/dpdk/spdk0/hugepage_info
00:34:57.385 Removing: /var/run/dpdk/spdk0
00:34:57.385 Removing: /var/run/dpdk/spdk_pid57729
00:34:57.385 Removing: /var/run/dpdk/spdk_pid57970
00:34:57.385 Removing: /var/run/dpdk/spdk_pid58205
00:34:57.385 Removing: /var/run/dpdk/spdk_pid58309
00:34:57.385 Removing: /var/run/dpdk/spdk_pid58365
00:34:57.385 Removing: /var/run/dpdk/spdk_pid58494
00:34:57.385 Removing: /var/run/dpdk/spdk_pid58523
00:34:57.385 Removing: /var/run/dpdk/spdk_pid58732
00:34:57.385 Removing: /var/run/dpdk/spdk_pid58844
00:34:57.385 Removing: /var/run/dpdk/spdk_pid58957
00:34:57.385 Removing: /var/run/dpdk/spdk_pid59079
00:34:57.385 Removing: /var/run/dpdk/spdk_pid59193
00:34:57.385 Removing: /var/run/dpdk/spdk_pid59232
00:34:57.385 Removing: /var/run/dpdk/spdk_pid59269
00:34:57.385 Removing: /var/run/dpdk/spdk_pid59345
00:34:57.385 Removing: /var/run/dpdk/spdk_pid59441
00:34:57.385 Removing: /var/run/dpdk/spdk_pid59924
00:34:57.385 Removing: /var/run/dpdk/spdk_pid59999
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60082
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60103
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60257
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60273
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60421
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60443
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60512
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60536
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60605
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60623
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60824
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60866
00:34:57.385 Removing: /var/run/dpdk/spdk_pid60955
00:34:57.385 Removing: /var/run/dpdk/spdk_pid61149
00:34:57.385 Removing: /var/run/dpdk/spdk_pid61244
00:34:57.385 Removing: /var/run/dpdk/spdk_pid61297
00:34:57.385 Removing: /var/run/dpdk/spdk_pid61780
00:34:57.385 Removing: /var/run/dpdk/spdk_pid61878
00:34:57.385 Removing: /var/run/dpdk/spdk_pid61993
00:34:57.385 Removing: /var/run/dpdk/spdk_pid62046
00:34:57.385 Removing: /var/run/dpdk/spdk_pid62077
00:34:57.385 Removing: /var/run/dpdk/spdk_pid62161
00:34:57.385 Removing: /var/run/dpdk/spdk_pid62797
00:34:57.385 Removing: /var/run/dpdk/spdk_pid62839
00:34:57.385 Removing: /var/run/dpdk/spdk_pid63359
00:34:57.385 Removing: /var/run/dpdk/spdk_pid63457
00:34:57.385 Removing: /var/run/dpdk/spdk_pid63572
00:34:57.385 Removing: /var/run/dpdk/spdk_pid63630
00:34:57.385 Removing: /var/run/dpdk/spdk_pid63656
00:34:57.385 Removing: /var/run/dpdk/spdk_pid63687
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65567
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65710
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65714
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65737
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65781
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65785
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65797
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65847
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65851
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65863
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65913
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65918
00:34:57.385 Removing: /var/run/dpdk/spdk_pid65930
00:34:57.385 Removing: /var/run/dpdk/spdk_pid67335
00:34:57.385 Removing: /var/run/dpdk/spdk_pid67442
00:34:57.385 Removing: /var/run/dpdk/spdk_pid68863
00:34:57.385 Removing: /var/run/dpdk/spdk_pid70611
00:34:57.385 Removing: /var/run/dpdk/spdk_pid70691
00:34:57.385 Removing: /var/run/dpdk/spdk_pid70766
00:34:57.385 Removing: /var/run/dpdk/spdk_pid70877
00:34:57.385 Removing: /var/run/dpdk/spdk_pid70969
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71069
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71150
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71231
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71342
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71438
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71535
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71615
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71690
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71800
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71892
00:34:57.385 Removing: /var/run/dpdk/spdk_pid71992
00:34:57.385 Removing: /var/run/dpdk/spdk_pid72072
00:34:57.386 Removing: /var/run/dpdk/spdk_pid72144
00:34:57.386 Removing: /var/run/dpdk/spdk_pid72254
00:34:57.386 Removing: /var/run/dpdk/spdk_pid72346
00:34:57.386 Removing: /var/run/dpdk/spdk_pid72453
00:34:57.386 Removing: /var/run/dpdk/spdk_pid72526
00:34:57.386 Removing: /var/run/dpdk/spdk_pid72603
00:34:57.386 Removing: /var/run/dpdk/spdk_pid72678
00:34:57.643 Removing: /var/run/dpdk/spdk_pid72761
00:34:57.643 Removing: /var/run/dpdk/spdk_pid72876
00:34:57.643 Removing: /var/run/dpdk/spdk_pid72967
00:34:57.643 Removing: /var/run/dpdk/spdk_pid73062
00:34:57.643 Removing: /var/run/dpdk/spdk_pid73147
00:34:57.643 Removing: /var/run/dpdk/spdk_pid73224
00:34:57.643 Removing: /var/run/dpdk/spdk_pid73298
00:34:57.643 Removing: /var/run/dpdk/spdk_pid73378
00:34:57.643 Removing: /var/run/dpdk/spdk_pid73487
00:34:57.643 Removing: /var/run/dpdk/spdk_pid73578
00:34:57.643 Removing: /var/run/dpdk/spdk_pid73722
00:34:57.643 Removing: /var/run/dpdk/spdk_pid74012
00:34:57.643 Removing: /var/run/dpdk/spdk_pid74054
00:34:57.643 Removing: /var/run/dpdk/spdk_pid74527
00:34:57.643 Removing: /var/run/dpdk/spdk_pid74707
00:34:57.643 Removing: /var/run/dpdk/spdk_pid74806
00:34:57.643 Removing: /var/run/dpdk/spdk_pid74916
00:34:57.643 Removing: /var/run/dpdk/spdk_pid74964
00:34:57.643 Removing: /var/run/dpdk/spdk_pid74989
00:34:57.643 Removing: /var/run/dpdk/spdk_pid75279
00:34:57.643 Removing: /var/run/dpdk/spdk_pid75335
00:34:57.643 Removing: /var/run/dpdk/spdk_pid75425
00:34:57.643 Removing: /var/run/dpdk/spdk_pid75847
00:34:57.643 Removing: /var/run/dpdk/spdk_pid75999
00:34:57.643 Removing: /var/run/dpdk/spdk_pid76797
00:34:57.643 Removing: /var/run/dpdk/spdk_pid76940
00:34:57.643 Removing: /var/run/dpdk/spdk_pid77127
00:34:57.643 Removing: /var/run/dpdk/spdk_pid77230
00:34:57.643 Removing: /var/run/dpdk/spdk_pid77595
00:34:57.643 Removing: /var/run/dpdk/spdk_pid77874
00:34:57.643 Removing: /var/run/dpdk/spdk_pid78234
00:34:57.643 Removing: /var/run/dpdk/spdk_pid78452
00:34:57.643 Removing: /var/run/dpdk/spdk_pid78584
00:34:57.643 Removing: /var/run/dpdk/spdk_pid78650
00:34:57.643 Removing: /var/run/dpdk/spdk_pid78805
00:34:57.643 Removing: /var/run/dpdk/spdk_pid78836
00:34:57.643 Removing: /var/run/dpdk/spdk_pid78911
00:34:57.643 Removing: /var/run/dpdk/spdk_pid79126
00:34:57.643 Removing: /var/run/dpdk/spdk_pid79380
00:34:57.643 Removing: /var/run/dpdk/spdk_pid79802
00:34:57.643 Removing: /var/run/dpdk/spdk_pid80262
00:34:57.643 Removing: /var/run/dpdk/spdk_pid80699
00:34:57.643 Removing: /var/run/dpdk/spdk_pid81222
00:34:57.643 Removing: /var/run/dpdk/spdk_pid81370
00:34:57.643 Removing: /var/run/dpdk/spdk_pid81481
00:34:57.643 Removing: /var/run/dpdk/spdk_pid82158
00:34:57.643 Removing: /var/run/dpdk/spdk_pid82239
00:34:57.643 Removing: /var/run/dpdk/spdk_pid82694
00:34:57.643 Removing: /var/run/dpdk/spdk_pid83103
00:34:57.643 Removing: /var/run/dpdk/spdk_pid83634
00:34:57.643 Removing: /var/run/dpdk/spdk_pid83762
00:34:57.643 Removing: /var/run/dpdk/spdk_pid83815
00:34:57.643 Removing: /var/run/dpdk/spdk_pid83886
00:34:57.643 Removing: /var/run/dpdk/spdk_pid83953
00:34:57.643 Removing: /var/run/dpdk/spdk_pid84023
00:34:57.643 Removing: /var/run/dpdk/spdk_pid84261
00:34:57.643 Removing: /var/run/dpdk/spdk_pid84346
00:34:57.643 Removing: /var/run/dpdk/spdk_pid84421
00:34:57.643 Removing: /var/run/dpdk/spdk_pid84494
00:34:57.643 Removing: /var/run/dpdk/spdk_pid84541
00:34:57.643 Removing: /var/run/dpdk/spdk_pid84615
00:34:57.643 Removing: /var/run/dpdk/spdk_pid84747
00:34:57.643 Clean
00:34:57.643 09:31:52 -- common/autotest_common.sh@1453 -- # return 0
00:34:57.643 09:31:52 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:57.643 09:31:52 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:57.643 09:31:52 -- common/autotest_common.sh@10 -- # set +x
00:34:57.902 09:31:52 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:57.902 09:31:52 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:57.902 09:31:52 -- common/autotest_common.sh@10 -- # set +x
00:34:57.902 09:31:52 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:57.902 09:31:52 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:57.902 09:31:52 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:57.902 09:31:52 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:57.902 09:31:52 -- spdk/autotest.sh@398 -- # hostname
00:34:57.902 09:31:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:58.159 geninfo: WARNING: invalid characters removed from testname!
00:35:30.217 09:32:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:30.217 09:32:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:32.135 09:32:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:35.419 09:32:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:37.961 09:32:32 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:41.247 09:32:35 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:43.776 09:32:38 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
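The lcov calls above follow the usual capture, merge, filter shape: capture a test-time snapshot of the tree, add it to the pre-test base capture, then strip vendored and system paths from the total. Condensed, with the repeated --rc options omitted for readability (the paths resolve to /home/vagrant/spdk_repo/output as in this job):

# Condensed form of the coverage post-processing traced above.
out=/home/vagrant/spdk_repo/output
lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o $out/cov_test.info
lcov -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
lcov -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info   # drop vendored DPDK
lcov -q -r $out/cov_total.info '/usr/*'   -o $out/cov_total.info   # drop system sources
# ...and likewise for '*/examples/vmd/*', '*/app/spdk_lspci/*', '*/app/spdk_top/*'.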
00:35:43.776 09:32:38 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:43.776 09:32:38 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:43.776 09:32:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:43.776 09:32:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:43.776 09:32:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:43.776 + [[ -n 5292 ]]
00:35:43.776 + sudo kill 5292
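timing_finish, traced above, renders the per-step timing.txt log as a flame graph when the FlameGraph tools are installed. A sketch of that tail end under the same guards the trace shows; the SVG redirect target is an assumption, since the trace does not show where the output goes:

# Sketch of the timing_finish step; flamegraph.pl flags are those in the
# trace above, the timing.svg output path is assumed.
timing=/home/vagrant/spdk_repo/output/timing.txt
if [[ -e $timing && -x /usr/local/FlameGraph/flamegraph.pl ]]; then
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
        --nametype Step: --countname seconds "$timing" > timing.svg
fi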
00:35:43.785 [Pipeline] }
00:35:43.803 [Pipeline] // timeout
00:35:43.809 [Pipeline] }
00:35:43.827 [Pipeline] // stage
00:35:43.833 [Pipeline] }
00:35:43.847 [Pipeline] // catchError
00:35:43.855 [Pipeline] stage
00:35:43.858 [Pipeline] { (Stop VM)
00:35:43.873 [Pipeline] sh
00:35:44.153 + vagrant halt
00:35:48.338 ==> default: Halting domain...
00:35:53.684 [Pipeline] sh
00:35:53.959 + vagrant destroy -f
00:35:57.311 ==> default: Removing domain...
00:35:57.890 [Pipeline] sh
00:35:58.167 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output
00:35:58.176 [Pipeline] }
00:35:58.190 [Pipeline] // stage
00:35:58.196 [Pipeline] }
00:35:58.211 [Pipeline] // dir
00:35:58.216 [Pipeline] }
00:35:58.232 [Pipeline] // wrap
00:35:58.239 [Pipeline] }
00:35:58.252 [Pipeline] // catchError
00:35:58.262 [Pipeline] stage
00:35:58.264 [Pipeline] { (Epilogue)
00:35:58.278 [Pipeline] sh
00:35:58.558 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:05.130 [Pipeline] catchError
00:36:05.132 [Pipeline] {
00:36:05.146 [Pipeline] sh
00:36:05.492 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:05.492 Artifacts sizes are good
00:36:05.500 [Pipeline] }
00:36:05.514 [Pipeline] // catchError
00:36:05.524 [Pipeline] archiveArtifacts
00:36:05.531 Archiving artifacts
00:36:05.636 [Pipeline] cleanWs
00:36:05.647 [WS-CLEANUP] Deleting project workspace...
00:36:05.647 [WS-CLEANUP] Deferred wipeout is used...
00:36:05.652 [WS-CLEANUP] done
00:36:05.654 [Pipeline] }
00:36:05.668 [Pipeline] // stage
00:36:05.673 [Pipeline] }
00:36:05.687 [Pipeline] // node
00:36:05.692 [Pipeline] End of Pipeline
00:36:05.742 Finished: SUCCESS